Bit.ly Sends A Request To My Long URL During Creation

I am using Bit.ly to shorten a long, one-time URL that I send via SMS to a user to verify their phone number. The problem is that Bit.ly seems to send a request to this URL at the time the short URL is created, I assume to validate it on their end before shortening. This causes the user's phone number to be verified before the user even gets the link to click.
Is there a solution here? I don't really want to add an additional step to the verification process; it should be a single action by the user (clicking the link). Can I easily detect and ignore (or handle differently) the Bit.ly request to the URL somehow?

Found the answer.
All requests from Bitly should have the User-Agent "bitlybot", which makes it possible to ignore such requests.
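For example, a minimal sketch of that check, assuming a Flask endpoint (the route and the verification helper are hypothetical stand-ins for the real logic):

    from flask import Flask, request

    app = Flask(__name__)

    def complete_phone_verification(token):
        # Placeholder: mark the phone number verified and show a confirmation.
        return "Phone number verified."

    # Hypothetical route; the real one is whatever the SMS link points at.
    @app.route("/verify/<token>")
    def verify(token):
        # Bitly's checker identifies itself as "bitlybot" in the User-Agent.
        user_agent = request.headers.get("User-Agent", "")
        if "bitlybot" in user_agent.lower():
            # Return something harmless without consuming the one-time token.
            return "OK", 200
        return complete_phone_verification(token)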

Related

Is it possible to redirect back to a POST or PUT request URL after a successful SAML SSO assertion?

I have read about the RelayState parameter in SAML SSO, and how the SP can redirect the user back to the original incoming URL by making use of RelayState, but to my knowledge an HTTP redirect only works for GET requests.
I am wondering if the same can be done for POST, PUT and DELETE requests, considering these requests usually come with data in the request body as well. I thought of returning a self-submitting form for POST requests, but this won't work for any other HTTP verb, and the original request must be a form based request too, unless the framework supports treating all types of parameters (query string, form field, json element) similarly. I also thought of making the frontend reconstruct the original request and sending it back to SP with AJAX, but I'm not sure if AJAX can actually update the browser's current page address.
My compromise solution in the end was to relay only URLs that result in a whole new page render with a GET verb, and for any other request, to relay the referrer URL instead. This means that for the latter, the user has to perform the task manually again after landing on the page they last saw before the SSO flow.
Not sure what the common practice in the industry is.
Thank you!
You would need to maintain/save the POST data on the SP end and re-use it after the SAML flow succeeds. SAML as such does not provide any means to achieve this.
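A minimal sketch of that park-and-replay idea, assuming a Flask SP (build_sso_url, the /saml/acs route name, and the auth check are all placeholders for your real SAML plumbing):

    import uuid
    from flask import Flask, request, session, redirect

    app = Flask(__name__)
    app.secret_key = "change-me"  # required for the server-side session cookie

    def build_sso_url(relay_state):
        # Placeholder: build the real AuthnRequest redirect to your IdP here.
        return "https://idp.example.com/sso?RelayState=" + relay_state

    @app.before_request
    def park_unsafe_requests():
        if request.path == "/saml/acs" or request.method == "GET":
            return None
        if session.get("authenticated"):  # hypothetical auth check
            return None
        # Park the request body under a random key before starting SSO.
        relay_id = str(uuid.uuid4())
        session["pending:" + relay_id] = {
            "method": request.method,
            "path": request.path,
            "data": request.get_data(as_text=True),
        }
        # Only this key travels through RelayState, never the data itself.
        return redirect(build_sso_url(relay_id))

    @app.route("/saml/acs", methods=["POST"])
    def acs():
        # After validating the assertion (omitted here), recover the parked data.
        session["authenticated"] = True
        pending = session.pop("pending:" + request.form.get("RelayState", ""), None)
        # How you re-dispatch the saved request is application-specific.
        return pending or redirect("/")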

JMeter: OAuth 2.0 Authentication Process (B2C Architecture)

Steps:
1. Hitting the website: it is redirected to a URL which contains parameters such as STATE, NONCE, and CLIENT-REQUEST-ID, which are dynamic.
2. On clicking sign-in with credentials, the authentication process happens, which generates a token ID.
3. In the next request, redirects occur and the same kind of URL is reached (as in step 1). The same parameters are passed again.
4. With this request, the access token is generated.
In JMeter, I am unable to fetch those values (nonce, state, client-request-id) because they arrive directly in an HTTP request. Any idea how to fetch them? Is there anything we can do?
According to Microsoft, client-request-id is optional (so you can probably just leave it off) and, if I read this right, it is generated by the client. So you may be able to just generate a random GUID in JMeter.
If you're being redirected to a URL which contains the parameters you're looking for, you should be able to capture them from the sub-sampler using a suitable Post-Processor like the Regular Expression Extractor.
Also, some values like the consumer key are static and never change, while some values like nonce are random.
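For example, assuming the redirect lands on a URL like https://login.example.com/authorize?state=...&nonce=..., a Regular Expression Extractor attached to the first request could be configured like this (reference names are arbitrary):

    Apply to:            Main sample and sub-samples
    Field to check:      URL
    Reference Name:      state
    Regular Expression:  state=([^&]+)
    Template:            $1$
    Match No.:           1

A second extractor for nonce works the same way with nonce=([^&]+). For client-request-id, JMeter's built-in __UUID function generates a random GUID, so you can send ${__UUID()} directly as the parameter value.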
If you don't need to load test the OAuth login challenge itself you can ask developers or administrators to provide you a permanent token which you can send in the Authorization header using HTTP Header Manager
Yes, you are correct, but in my case I am not getting any sub-samplers.
That's where the trouble lies!
Also, those parameters are coming from a third party which hosts the site (it is not in the hands of the devs).
The whole process I am doing is for load testing.
So, is there anything you want to add for this?

Detecting link checkers (spam filters) in incoming HTTP requests

We have a site that uses a "one-time" login process for password resets which are not initiated by the users themselves (for instance, a password reset initiated by an admin or another employee). A URL is sent to the user via email which can then be used to reset their password. The URL can only be visited one time. (There's more to this for security's sake, but I'll keep it simple.) Recently, some users have complained that when they visit the link, it has already expired. The end result is that they can't reset their passwords using this feature. We discovered that the users in question have a spam filter or "link checker" in their environment that they do not have access to. This device visits the one-time link before the user is able to, to make sure it's safe.
I'm trying to solve this issue and was wondering: is there a way I can detect these types of devices on the web server when the request is made? When the spam filter visits the link, is there something in the HTTP request that would stand apart from a regular browser? Maybe they all use a specific custom HTTP header? Or maybe there's a regex I could use on the user agent? I haven't been able to catch one of these yet, so I'm not sure what the request looks like coming from a spam filter.
Does anyone know of a way to detect spam filters of any vendor by looking at the HTTP requests? I know it's a long shot, but maybe they all use a specific header for reasons such as this?
I got approval to modify the design to remove the one-time aspect of the URL. This solves the issue and saves me the headache. Thanks for the suggestion, @PeeHaa.
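For anyone who cannot change the design, a first line of defense is filtering on the User-Agent. A rough sketch, assuming a Flask route; the pattern list is purely illustrative, since these products rarely document their agents and some imitate ordinary browsers:

    import re
    from flask import Flask, request

    app = Flask(__name__)

    # Illustrative patterns only; vendors differ and these are not exhaustive.
    LINK_CHECKER_UA = re.compile(r"bot|crawler|spider|preview|scan|check", re.I)

    def render_reset_form(token):
        # Placeholder for the real one-time reset page.
        return "Reset form for " + token

    @app.route("/reset/<token>")
    def reset(token):
        ua = request.headers.get("User-Agent", "")
        if not ua or LINK_CHECKER_UA.search(ua):
            # Don't consume the token; return a neutral page instead.
            return "Please open this link in your web browser.", 200
        return render_reset_form(token)

Some scanners also reportedly issue a HEAD request first, which you could treat the same way without consuming the token.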

Can you send subsequent HTTP POST requests to a server

I'm just getting started with HTTP POST requests, so much so that I've got no idea whether this will even solve my problem, but it seems like an interesting thing to learn about either way. Anyway, I currently do the following with a WebBrowser control:
1. Navigate to a page
2. Enter username and password
3. Click the 'login' button
4. Enter some text into textboxes
5. Click another button (which loads a confirm page)
6. Click the confirm button
My question is: does the HTTP POST approach allow the web client to stay logged into the page, and does it allow posting to the page and then posting again once the updated page has been received (steps 4, 5 and 6)?
So you want to scrape some web content or manipulate a site from a program or script, but you're having a hard time. No, just switching to a POST will not help you here. Often, the problem has to do with authentication. What you need to do is preserve your session across more than one HTTP request, whether the requests are POST, GET, HEAD, PUT, DELETE, etc.
As mentioned in a comment, HTTP is stateless: each request is independent of the others. However, web servers still maintain information for individual sessions, so you usually need more than one request. I find that much of the time, exactly two requests are enough to accomplish an action on a web site.
The first request will POST your login information to the site. At this point, the web site will issue a response. You need to analyze this response, because somewhere in there will be a session key. When I say analyze the response, I don't mean that you write code to do this; that will come later. You need to actually send a sample request, record the response, and read through it with your own eyes to find the session key. You also need to know how the web server expects to find the session key in future requests.
In this process, it's important to remember that a response consists of more than just HTML. In fact, the most common location for this key is in a cookie. Once you know how to get the session key, you need to make sure your next request includes it. This is how the web site knows who you are, that you are authorized to perform the desired action, and what information to return.
The second request will actually perform the desired action. This could be a simple GET request, if all you want to do is retrieve some information from the site. It may also be POST, if you need to tell the site to perform some action.
To know what your requests need to look like, you can use a special kind of HTTP proxy. Fiddler is a popular choice. You install the proxy on your computer and then perform the desired action from a regular web browser; Fiddler will tell you exactly what requests and responses were sent. Even if you need to view a number of pages to complete the action in your browser, often you still only need the final request to actually accomplish your goal. You use the information provided by Fiddler to find and duplicate the required requests.
In the .NET world, the best tool for sending these requests and evaluating the responses is generally not the WebBrowser control. Instead, take a look at the System.Net.WebClient class, or at System.Net.HttpWebRequest/System.Net.HttpWebResponse.
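To make the two-request pattern concrete, here is a sketch in Python's requests library for illustration (the URLs and form field names are placeholders; on .NET the equivalent is an HttpWebRequest with a shared CookieContainer):

    import requests

    # A Session object stores cookies (including the session key) between calls.
    session = requests.Session()

    # Request 1: POST the login form. The URL and field names are placeholders;
    # Fiddler will show you the real ones your target site expects.
    login = session.post(
        "https://example.com/login",
        data={"username": "me", "password": "secret"},
    )
    login.raise_for_status()

    # Request 2: perform the actual action. The session cookie captured above
    # is attached to this request automatically.
    result = session.post(
        "https://example.com/update",
        data={"textbox1": "some text", "confirm": "true"},
    )
    print(result.status_code)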

Executing JavaScript during a redirect without changing the original referrer

I need to test whether or not a click-through is valid by using some JavaScript client-side tests (e.g., browser window dimensions).
However, I would like the original click referrer to remain the same. Is there a way I can do a redirect, execute some JavaScript, capture the browser details and then continue the click-through while keeping the original referrer value the same?
If there isn't, then simply include the referrer as one of the "browser details" that you capture and send back with the redirection instruction. The referrer probably isn't available on the client automatically, so it will work like this:
1. Client sends the initial request, presumably including a referrer.
2. Server dynamically generates the client-side testing page, embedding the referrer in a JavaScript variable.
3. Client collects client attributes, including the referrer value stored in step 2.
4. Client sends the collected attributes to the server with a new redirection request.
5. Server records the referrer parameter somewhere, although not in the HTTP logs, since the Referer header won't have the same value as what the JavaScript request sent.
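A minimal sketch of steps 1 through 4, assuming a Flask server (the /click and /continue routes are made up for illustration):

    import json
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/click")
    def click():
        # Step 1: read the referrer the browser sent with the initial request.
        referrer = request.headers.get("Referer", "")
        # Step 2: embed it in the client-side test page as a JS variable.
        return """<!doctype html>
    <script>
    var originalReferrer = """ + json.dumps(referrer) + """;
    // Step 3: collect client attributes, including the saved referrer.
    var details = {
        referrer: originalReferrer,
        width: window.innerWidth,
        height: window.innerHeight
    };
    // Step 4: send everything back with the follow-up redirect request.
    location.href = "/continue?d=" + encodeURIComponent(JSON.stringify(details));
    </script>"""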
Of course, you realize none of this is reliable anyway, because it all depends on the client including the Referer header in step 1; there's no guarantee that will happen, or, if it does, that the value you get is accurate. I also question the wisdom of doing client-side checks (especially of something as arbitrary as window dimensions) to determine the validity of a navigation request.