How would one authenticate and hit an HTTPS URL from Lua, and in particular get the resulting response back from the server?
Background: I basically want to use the Pushbullet API (https://docs.pushbullet.com/http/). I can see how to use it with curl, but I want to use it from Lua, and I really want to get the responses back so I can inspect them.
You can use LuaSec and LuaSocket. Require ssl.https and use it like socket.http.
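For example, a minimal sketch against the Pushbullet API, assuming LuaSocket and LuaSec are installed (e.g. luarocks install luasec); the access token is a placeholder:

    -- Perform an authenticated HTTPS GET and collect the response body.
    local https = require("ssl.https")
    local ltn12 = require("ltn12")

    local chunks = {}
    local ok, code, headers, status = https.request{
      url     = "https://api.pushbullet.com/v2/pushes",
      method  = "GET",
      headers = { ["Access-Token"] = "YOUR_TOKEN" },  -- placeholder token
      sink    = ltn12.sink.table(chunks),             -- receives the body
    }

    print(code, status)          -- HTTP status code and status line
    print(table.concat(chunks))  -- the raw JSON response

The generic form of https.request returns the status code and headers alongside the body, so you can inspect exactly what the server sent back.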
My use case is that I have pretty large files (>2GB, these are Cloud Optimized Geotiffs) on Google Cloud Storage, which can be used in applications through HTTP range requests.
I would like to filter out requests that are missing the Range header.
This would avoid the case of users downloading the whole file. (I guess someone could still make a range request for the whole file with a bit of work, but I am not concerned about that.)
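For context, such a range request simply carries a Range header (the path and host below are just an illustration):

    GET /my-bucket/large-file.tif HTTP/1.1
    Host: storage.googleapis.com
    Range: bytes=0-65535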
The documentation (https://firebase.google.com/docs/storage/security/rules-conditions#request_evaluation) says "HTTP headers and authentication state are also included", so I would expect to be able to use this information in the security rules.
Is it possible at all and if it is, how?
I cannot find any example of using HTTP headers in security rules conditions. I have also tried the Rules Playground in Firebase, but couldn't figure out how to access the request headers.
It doesn't seem like there's any way to access HTTP headers. The only request variables are those listed in the documentation.
You can try the request.params variable, which is populated with the query parameters present in the request.
E.g. <firebase storage url>?myParam=true -> request.params.myParam == "true" should work.
No, it is not possible to filter requests depending on HTTP headers.
The request variable in the security rules does not include HTTP headers (as stated by a firebaser in the comments on Roopa M's answer). The documentation has been updated since this question was asked and no longer states that this information is included.
Roopa M's answer gives an idea for filtering requests based on query parameters, which might help you, but that is independent of HTTP headers.
To really handle requests according to their HTTP headers in the context of Firebase, it is probably necessary to rely on a Cloud Function acting as middleware; these have access to the full HTTP request, if I am not mistaken.
Alternatively, this kind of rule should be reasonably easy to implement in a regular web server like Nginx, if you have the option to build your project in such an environment.
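For instance, in Nginx the rule could look something like this (a sketch; the location and root paths are placeholders for your setup):

    # Reject any request that does not carry a Range header.
    location /files/ {
        if ($http_range = "") {
            return 403;
        }
        root /var/www/data;
    }

Nginx exposes any request header as a $http_<name> variable, so no extra module is needed for this check.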
All RESTful examples seem to consider only the RESOURCE at a given level, e.g.
HTTP GET http://www.appdomain.com/users
HTTP GET http://www.appdomain.com/users?size=20&page=5
HTTP GET http://www.appdomain.com/users/123
HTTP GET http://www.appdomain.com/users/123/address
What if I want users and address to be returned at the same time? E.g. users, address and anything underneath? What is the RESOURCE naming standard then? Or is there another approach? E.g.
HTTP GET http://www.appdomain.com/usersALLdata/123
Is this correct? Or should it be HTTP GET http://www.appdomain.com/users/123/address/xyz...? That seems problematic if address had both xyz and abc tables as children. I cannot find any guidance on this.
My conclusion is that you must add parameters, e.g. https://api.github.com/users/zellwk/repos?sort=pushed
meaning we would use:
HTTP GET http://www.appdomain.com/users/123?ALL=true
Looking for confirmation.
There is no standard way to do this unless you decide to follow a standardized API-building solution, and even those just recommend certain URI conventions, e.g. OData: http://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part2-url-conventions.html
Normally I would not care much and would just respond with a verbose JSON document, unless I really wanted to support some sort of advanced search.
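For instance, a common (but non-standard) convention is an expand-style query parameter; the parameter name and the payload below are purely illustrative:

    GET http://www.appdomain.com/users/123?expand=address

    {
      "id": 123,
      "name": "Jane Doe",
      "address": {
        "street": "1 Main St",
        "city": "Springfield"
      }
    }

Clients that omit the parameter get the plain user resource; clients that pass it get the nested children inlined.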
I am migrating a website that has many redirections. I would like to generate a list of all redirects, with source and target.
I tried using Cyotek WebCopy, but it seems unable to give me the data I need. Is there a crawling method to do that? Or can this perhaps be extracted from the Apache logs?
Of course you can do it by crawling the website, but I advise against it in this specific situation, because there is an easier solution.
You use Apache, so you are (probably) working with the HTTP/HTTPS protocol. You could use the HTTP referrer: if you use PHP, you can read the previous page via $_SERVER['HTTP_REFERER']. So you will need to do the following (see the sketch after this list):
figure out a way to store previous-next page pairs
at the start of each request, store such a pair, using the current URL and the previous one
maybe you will need to group your URLs and do some aggregation
load the output somewhere and analyze it
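A minimal sketch of the logging step in PHP (the SQLite path and table name are placeholders, and HTTP_REFERER is only present when the browser actually sends it):

    <?php
    // Store a (previous, current) URL pair at the start of each request.
    $previous = $_SERVER['HTTP_REFERER'] ?? '';  // empty if no referrer was sent
    $current  = $_SERVER['REQUEST_URI'];

    $db = new PDO('sqlite:/tmp/page_pairs.db');  // placeholder storage
    $db->exec('CREATE TABLE IF NOT EXISTS page_pairs (prev TEXT, curr TEXT)');
    $stmt = $db->prepare('INSERT INTO page_pairs (prev, curr) VALUES (?, ?)');
    $stmt->execute([$previous, $current]);

Grouping the stored pairs afterwards shows which source URLs lead to which targets.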
I'm having real trouble getting Informatica PowerCenter or Developer to call a URI-based REST API, even though I'm doing something simple (JIRA's API). Basically, I want to call JIRA's worklog REST API, which uses a different URL per issue id, for a list of issue ids, and write the results to our DB.
https://docs.atlassian.com/jira/REST/6.2/
/rest/api/2/issue/{issueIdOrKey}/worklog
Informatica PowerCenter supports only the HTTP transformation, which is only a simple GET. Unfortunately, the latest version is still stuck on the 'old' query-type URL building, where it appends inputs to the URL as a search string. E.g. if I have a "key" input field with value "ABC-1" and the URL is jira/rest/api/2/search, it actually builds the URL on the fly into jira/rest/api/2/search?key=ABC-1. While some of JIRA's API works this way, some endpoints use the URI style, e.g. jira/rest/api/2/ABC-1/worklog, which requires embedding the value in the URI. There's no way I can get this to work:
if I do jira/rest/api/$key/worklog, it still converts the URI into jira/rest/api/$key/worklog/?key=ABC-1, so $key does not get replaced
even if I pre-build the URI outside the mapping, it's not feasible, as the URI needs to be dynamic per JIRA key; and in any case, because it appends ? at the end, JIRA throws an error (? is a reserved character for this API)
the HTTP transformation does not support NTLMv2 authentication, which our company's JIRA instance may upgrade to shortly
the last resort is to use a Java transformation, where Informatica adds quite little value (a sketch of what that would involve is below). It also means I need to somehow pass in the JIRA user's password for authentication, which is a separate challenge (versus just storing it as an HTTP connection)
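For reference, the kind of thing the Java transformation would have to do, with basic auth, is roughly the following (the base URL and credentials are placeholders; NTLMv2 would be a different story):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Base64;

    public class WorklogFetcher {
        // Fetch the worklog JSON for one issue key via the URI-style endpoint.
        public static String fetchWorklog(String issueKey, String user, String pass) throws Exception {
            URL url = new URL("https://jira.example.com/rest/api/2/issue/" + issueKey + "/worklog");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            String token = Base64.getEncoder().encodeToString((user + ":" + pass).getBytes("UTF-8"));
            conn.setRequestProperty("Authorization", "Basic " + token);
            conn.setRequestProperty("Accept", "application/json");

            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = in.readLine()) != null) body.append(line);
            }
            return body.toString();
        }
    }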
Informatica Developer supports the REST Web Consumer transformation, but it has similar limitations, building only query-type URLs. Even worse, I can't even dynamically build the URL, since it's fixed to the URL of the HTTP connection object.
Am I straight outta luck?
I saw this question and would like to answer it. I can only give brief points here, which might not explain things properly, so I am putting a link to a blog post where the task of reading a REST API from Informatica is described in detail, step by step, with a video tutorial. Some examples are also there. Feel free to visit:
https://zappysys.com/blog/read-json-informatica-import-rest-api-json-file/
Hope it helps.
I'm using Google Custom Search with a search engine that searches specific sites and excludes some patterns on those sites.
I'm testing the API locally and receive 12 results. I test the exact same call in staging (Heroku, US region) and receive 410 results.
Does Google personalise the results when using a custom search engine?
If yes, how do I turn it off? If no, do you have any idea why I am seeing this difference?
Update
OK, I did a test. I issued the exact same request with and without a proxy, and the results are vastly different.
Now, the question is, can this behaviour be disabled?
OK, found it. By specifying the userIp param (https://developers.google.com/custom-search/json-api/v1/using_rest), Google is forced to apply the same behaviour regardless of location.
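For example (API_KEY, ENGINE_ID, and the IP are placeholders):

    GET https://www.googleapis.com/customsearch/v1?key=API_KEY&cx=ENGINE_ID&q=QUERY&userIp=203.0.113.7

Pinning userIp to a fixed value should then give consistent results across environments.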