I'm just getting started with HTTP POST requests. So much so that I've got no idea if this is even going to solve my problem, but it seems like an interesting thing to learn about either way. Anyway, I currently do the following with a WebBrowser control:
Navigate to a page
Enter username and password
Click the 'login' button
Enter some text into textboxes
Click another button (which loads a confirm page)
Click the confirm button
My question is: does the HTTP POST request approach allow the web client to stay logged into the page, and does it allow posting to the page and then posting again once the updated page has been received (steps 4, 5 and 6)?
So you want to scrape some web content or manipulate a site from a program or script, but you're having a hard time. No, just switching to a POST will not help you here. Often, the problem has to do with authentication. What you need to do is preserve your session across more than one HTTP request, whether the requests are POST, GET, HEAD, PUT, DELETE, PATCH, etc.
As mentioned in a comment, HTTP is stateless: each request is independent of the others. However, web servers still maintain some information for individual sessions, and so you usually need more than one request. In my experience, exactly two requests are often enough to accomplish an action on a web site.
The first request will POST your login information to the site. At this point, the web site will issue a response. You need to analyze this response, because somewhere in there will be a session key. When I say analyze the response, I don't mean write code to do this... that comes later. You need to actually send a sample request, record the response, and read through it with your own eyes to find the session key. You also need to know how the web server expects to receive the session key on future requests.
In this process, it's important to remember that a response consists of more than just HTML. In fact, the most common location for this key is in a cookie. Once you know how to get the session key, you need to make sure your next request includes that session key as part of the request. This is how the web site will know who you are, that you are authorized to perform the desired action, and what information to return.
The second request will actually perform the desired action. This could be a simple GET request, if all you want to do is retrieve some information from the site. It may also be POST, if you need to tell the site to perform some action.
To know what your requests need to look like, you can use a special kind of HTTP proxy. Fiddler is a popular choice. You install the proxy on your computer and then perform the desired action from a regular web browser. Fiddler will then show you exactly which requests and responses were sent. Even if you need to view a number of pages to complete your action in the browser, you often still only need the final request to actually accomplish your goal. You use the information provided by Fiddler to find and duplicate the required requests.
In the .NET world, the best tool for sending these requests and evaluating the responses is generally not the WebBrowser control. Instead, take a look at the System.Net.WebClient class, or at System.Net.HttpWebRequest/System.Net.HttpWebResponse.
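To make the two-request pattern concrete, here is a rough sketch of the same flow using Python's requests library (the URL, form field names, and payloads are invented for illustration; the equivalent steps apply with HttpWebRequest):

```python
import requests

# A Session keeps cookies (often where the session key lives) across requests,
# the same way you would carry the cookie header yourself with HttpWebRequest.
session = requests.Session()

# Request 1: POST the login form. The URL and field names here are made up;
# take the real ones from the request you recorded with the proxy.
login = session.post(
    "https://www.example.com/login",
    data={"username": "me", "password": "secret"},
)
login.raise_for_status()
# session.cookies now holds whatever session key the site issued.

# Request 2: perform the actual action, re-using the same session/cookies.
result = session.post(
    "https://www.example.com/items/confirm",
    data={"comment": "some text", "confirm": "yes"},
)
print(result.status_code, result.text[:200])
```

If the site expects the session key somewhere other than a cookie (a hidden form field or a custom header), you would copy it into the second request yourself, based on what you found when reading the recorded response.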
Related
I have read about the relayState parameter in SAML SSO, and how the SP can redirect the user back to the original incoming URL by making use of relayState, but to my knowledge HTTP redirect only works for GET requests.
I am wondering if the same can be done for POST, PUT and DELETE requests, considering these requests usually come with data in the request body as well. I thought of returning a self-submitting form for POST requests, but this won't work for any other HTTP verb, and the original request must be a form-based request too, unless the framework supports treating all types of parameters (query string, form field, JSON element) the same way. I also thought of making the frontend reconstruct the original request and send it back to the SP with AJAX, but I'm not sure AJAX can actually update the browser's current page address.
My compromise solution in the end was to only relay URLs that result in a whole new page rendering with a GET verb only, and for any other requests, use the referrer URL for relaying instead. This means for the latter, the user will have to perform the task manually again after landing on the page he last saw before the SSO flow.
Not sure what the common practice in the industry is.
Thank you!
You would have to maintain/save the POST data on the SP end and re-use it after the SAML flow succeeds. SAML as such does not provide any means to achieve this.
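A minimal sketch of that idea, assuming a Flask-based SP with an in-memory store; the endpoint names, the IdP redirect, and the business logic are all hypothetical, and a real SP would build a proper SAML AuthnRequest and validate the assertion with a SAML library:

```python
import uuid
from flask import Flask, redirect, request

app = Flask(__name__)

# RelayState -> saved form data (a real SP would use a server-side session or DB)
pending_posts = {}

def logged_in():
    # Placeholder: a real SP would check its local session here.
    return "sp_session" in request.cookies

def apply_order(data):
    # Placeholder for the business logic the original POST was meant to trigger.
    return f"order created: {data}", 201

@app.route("/orders", methods=["POST"])
def create_order():
    if not logged_in():
        # Save the body, then start the SAML flow, keyed by RelayState.
        relay_state = str(uuid.uuid4())
        pending_posts[relay_state] = request.form.to_dict()
        # Placeholder redirect; a real AuthnRequest would be generated here.
        return redirect(f"https://idp.example.com/sso?RelayState={relay_state}")
    return apply_order(request.form.to_dict())

@app.route("/saml/acs", methods=["POST"])
def acs():
    # Placeholder: validate request.form["SAMLResponse"] with a SAML library here.
    relay_state = request.form.get("RelayState", "")
    saved = pending_posts.pop(relay_state, None)
    if saved is not None:
        apply_order(saved)      # replay the original POST server-side
    return redirect("/orders")  # then send the browser on with a plain GET
```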
I am currently learning to make DRF APIs for something I am working on. I was wondering how exactly I would secure the API POST requests I send via the client side?
For example, let's say I have a login form where the user can enter their information; this information needs to be sent to (or POSTed to) my API for verification. I do not want just anyone sending requests to the server, so I would want to use an API key, but since this is being done on a website, anyone could see the API key if they wanted to, and then exploit the server by sending a ton of requests.
My current idea is to use serializers in DRF to check if the API POST request has everything it needs, but I am fairly certain the required format can easily be discovered by checking what sort of JSON my code sends to the server. So how exactly do I go about securing this, such that I can send the information to the bare domain (like http://127.0.0.1:8000) and have code which can accept that information?
I apologize for any confusion, if it is confusing. Let me know if you need any clarification.
If you create an API, anyone can send requests to the server; the same goes for any website or web page. There is no way to avoid this, but there are ways to handle possible misuse.
For example, use a CAPTCHA on the login form, which can only be filled in by a human. Since anyone can still submit wrong CAPTCHA text, you must check its correctness on the server, or use a service like Google reCAPTCHA to outsource this task.
An API key should be given after login, NOT before. If it is issued only after a successful login, then the key is held by a legitimate user, who can obviously do whatever they are allowed to do on the website, so there should not be a problem with that.
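For example, DRF's built-in token authentication follows this pattern: the key is handed out only after a successful username/password login and is then required on every other endpoint. A minimal sketch, assuming the standard rest_framework.authtoken app:

```python
# settings.py (relevant parts only)
INSTALLED_APPS = [
    # ... your other apps ...
    "rest_framework",
    "rest_framework.authtoken",
]
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": [
        "rest_framework.authentication.TokenAuthentication",
    ],
    "DEFAULT_PERMISSION_CLASSES": [
        "rest_framework.permissions.IsAuthenticated",
    ],
}

# urls.py: the login endpoint that issues the key only after checking credentials
from django.urls import path
from rest_framework.authtoken.views import obtain_auth_token

urlpatterns = [
    path("api/login/", obtain_auth_token),
]
```

The client POSTs its username and password to /api/login/, gets back {"token": "..."}, and sends Authorization: Token <key> on subsequent requests; anyone without valid credentials never receives a key.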
Going further into this question leads to the details of denial-of-service (DoS) attacks. You should consult an expert in that field if your application needs to handle DoS attacks.
This is a theoretical question. For some APIs, users need to authenticate themselves, and we have an authentication token for each user. I feel that using a GET API is not a good idea because of this token.
/get_data/?user_token=hshhlj8979kjhk&dataid=87979
Indeed it's not a good idea, but not due to GET in itself. The real problem is the token as part of the URL and the security problems it creates.
The URL portion of a request is very often cached and logged for auditing or debugging purposes, and having the token there causes it to leak unintentionally.
For example, browsers save your browsing history, and the main thing they record is the URL, so there goes your password into your history, a place it doesn't belong and from which it is easily exposed by accident.
Most web servers by default also log the URLs they receive, so again there goes your token. It's quite common for it to end up in logs on web servers, load balancers, intermediate routers and so on, again leaking all over the place.
The solution to this is to strip the token from the URL portion, leaving there only data that's not security-critical. The most common place to put it is in the request's headers. Those are well respected by the HTTP standard and almost never logged or accidentally dumped like the URL.
Of course, all other methods suffer from the same problem. POST, PUT, DELETE, OPTIONS, for example: none of them should ever be called with secret data in the URL. Headers provide a "safer" place for that, available across all methods. The request body is another common place, but you can't have one in a GET, making a header the best alternative.
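For illustration, here is the request from the question with the token moved out of the URL and into a header, using Python's requests (the host and the Bearer scheme are just examples; use whatever header scheme your API actually defines):

```python
import requests

response = requests.get(
    "https://api.example.com/get_data/",
    params={"dataid": "87979"},                          # non-secret data can stay in the URL
    headers={"Authorization": "Bearer hshhlj8979kjhk"},  # the secret goes in a header
)
print(response.status_code)
```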
Context:
I have written a simple polling app using the PERN stack (Postgres, Express, ReactJS, and NodeJS). The client sends a GET request to the server, which returns the question data, and the app displays it to the user. The user then selects option A or B, and this choice is sent via a POST request to the server, which then updates the database.
My issue is that anybody can view the RAW HTML of the client and see the server URL and send a POST/GET request of the same format themselves. Even if I used an authentication token, surely somebody could view the RAW HTML again, see the GET request and do it themselves?
It's possible I am completely missing something here so any help would be greatly appreciated.
The question is a little bit unclear. If you mean at the application layer, there is no way to prevent people from sending requests unless you disconnect the server from the internet, which is not an option. There are, however, ways to reduce the load and keep track of each request, which is what everyone does.
Limiting requests / Rate limit
If there are public endpoints which don't require any authentication, it's better to limit the number of requests sent from the same user in a specific period of time. There are packages for this that you might check out. This will help reduce the server's load.
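Those packages implement roughly the following idea; here is a toy in-memory sliding-window limiter sketched in Python purely to illustrate the concept (in an Express app you would normally use a ready-made rate-limiting middleware and a shared store such as Redis instead):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # client id (e.g. IP) -> request timestamps

    def allow(self, client_id):
        now = time.monotonic()
        q = self.hits[client_id]
        # Drop timestamps that have fallen outside the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # over the limit: reject the request (e.g. HTTP 429)
        q.append(now)
        return True

limiter = RateLimiter(max_requests=5, window_seconds=60)
print(limiter.allow("203.0.113.7"))  # True until the limit is reached
```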
Authentication
If you want to keep track of users' requests, you should authenticate them using a token or something similar. In that case, even clients that do send you requests won't be able to change any data on the backend until they are authorized.
Network layer
You can disable the public access to your server and allow only specific IPs to send requests (whitelisting). Even in that case, you don't prevent sending requests, you just ignore them. The other way is to use a CDN provider to protect your services from DDoS attacks.
We keep playing this cat and mouse game with Robinhood.com. I have a trading app which used to trade stocks with Robinhood, but they keep changing the unsupported unofficial API to make it difficult for traders to use. I know that many people are doing the same thing and I want to reach out to them to see if there is a new answer. The latest problem is when I try to get a Bearer token using the URL https://api.robinhood.com/oauth2/token/ the API returns the following JSON: {"detail":"This version of Robinhood is no longer supported. Please update your app or use Robinhood for Web to log in to your account."}. This started happening on 4/26/2019.
Has anyone found a workaround for this yet, or have they finally beaten us into submission?
A more complete solution (no browser needed); a rough Python sketch of these steps follows below:
Use a requests.Session.
Obtain the login page by making a GET request to "https://robinhood.com/login".
At this point the session's cookies will contain 'device_id'.
Obtain this device_id and use it when making the OAuth2 token request to "https://api.robinhood.com/oauth2/token/"; also add "challenge_type" (either "sms" or "email") to the request data.
This request will fail with a 400 error code. Robinhood will send an SMS message or email with a temporary (5-minute) code.
At this point, use the 400 response's body to get the "id" from the "challenge" object inside the JSON.
Confirm the challenge by making a POST request to "https://api.robinhood.com/challenge/CHALLENGEID/respond/", where CHALLENGEID is the id taken from the response to the first, failed /oauth2/token/ POST request.
Make the same POST request to "https://api.robinhood.com/oauth2/token/" again, and include the header "X-ROBINHOOD-CHALLENGE-RESPONSE-ID" with the value CHALLENGEID.
You can reuse a device_id with user/pass after this even after logging out.
Be cautious with storing device_id as it is the result of user/pass login and successful SMS/email 2FA.
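Here is a rough Python sketch of the steps above. The URLs and most field names come from this thread (the client_id is the one shown in the last answer below), but the exact payload format and the challenge response field name are assumptions, so check them against what your browser actually sends; this unofficial API also changes often.

```python
import requests

session = requests.Session()

# Step 1: load the login page so the session receives a device_id cookie.
session.get("https://robinhood.com/login")
device_id = session.cookies.get("device_id")

token_url = "https://api.robinhood.com/oauth2/token/"
payload = {
    "grant_type": "password",
    "scope": "internal",
    "client_id": "c82SH0WZOsabOXGP2sxqcj34FxkvfnWRZBKlBjFS",  # from the answer below
    "expires_in": 86400,
    "device_token": device_id,
    "challenge_type": "sms",  # or "email"
    "username": "YOUR_USERNAME",
    "password": "YOUR_PASSWORD",
}

# Step 2: this first token request is expected to fail with a 400 whose JSON
# body contains the challenge id.
first = session.post(token_url, data=payload)
challenge_id = first.json()["challenge"]["id"]

# Step 3: respond to the challenge with the temporary code sent via SMS/email.
# The field name "response" is an assumption; verify it in your browser.
code = input("Enter the code Robinhood sent you: ")
session.post(
    f"https://api.robinhood.com/challenge/{challenge_id}/respond/",
    data={"response": code},
)

# Step 4: repeat the token request with the challenge id header added.
second = session.post(
    token_url,
    data=payload,
    headers={"X-ROBINHOOD-CHALLENGE-RESPONSE-ID": challenge_id},
)
print(second.json().get("access_token"))
```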
Just got it working. At the risk of them seeing this post and changing it more, here we go:
First, you're going to want to log into your RH account in a web browser
View Source on the page, and look for clientId - it should be a big hex number separated by dashes
Add that number to your POST requests to /oauth2/token under the field device_token
There's probably another way to retrieve the device token, and I'm not even sure it's unique, but that way should work.
Good to be back here after a very long time.
Not sure if anyone is still looking for answers to this, but I have a very simple solution.
At Robinhood's login screen, enter your username/email and your password, press F12 on your keyboard to bring up the developer tools, switch to the "Network" tab, then wait for the page to load completely. (During this time you will see a list of items being loaded rapidly, depending on the connection speed.)
At this point you can keep clearing the list by clicking the clear button in the Network tab until the list is empty.
Now, log into your Robinhood account. At this point the Network tab should display a new list of requests.
Look for the entry named "token/"; most likely it will be the second one. It has all the information you need, and this information is under Headers, then Request Payload.
I was able to find this with past knowledge and experience of web scraping for fun. And also, I needed to know this as well, since I recently started doing trades via Robinhood.
Hope this helps you curious ones out there.
For my Robinhood account I am using Google Authenticator for my 2FA. What I have so far is that I send the same call I was sending before to https://api.robinhood.com/oauth2/token/. This gives me a response of:
{"mfa_required":true,"mfa_type":"app"}
I then repeat my oauth token request, but this time providing the value from Google Authenticator (so my GUI has to prompt me to fill it in) with this payload in the request to https://api.robinhood.com/oauth2/token/:
{"grant_type":"password","scope":"internal","client_id":"c82SH0WZOsabOXGP2sxqcj34FxkvfnWRZBKlBjFS","expires_in":86400,"device_token":"***","username":"***","password":"****","mfa_code":"***"}
and then I get an access token in reply.
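In code, that retry looks roughly like this (a sketch continuing the requests-based approach above; the field names come from the payload shown, and the credentials and device token are placeholders):

```python
import requests

token_url = "https://api.robinhood.com/oauth2/token/"
payload = {
    "grant_type": "password",
    "scope": "internal",
    "client_id": "c82SH0WZOsabOXGP2sxqcj34FxkvfnWRZBKlBjFS",
    "expires_in": 86400,
    "device_token": "YOUR_DEVICE_TOKEN",
    "username": "YOUR_USERNAME",
    "password": "YOUR_PASSWORD",
}

session = requests.Session()
first = session.post(token_url, json=payload)

if first.json().get("mfa_required"):
    # Prompt for the current Google Authenticator code and retry the same call.
    payload["mfa_code"] = input("Enter your 2FA code: ")
    second = session.post(token_url, json=payload)
    print(second.json().get("access_token"))
```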