I am using the Auth0 service and I have observed that after a user logs in successfully, the callback URL is called with an additional string "#_=_".
How can I turn that off? I have already searched the Auth0 documentation, but searching for "#_=_" does not yield reasonable results.
I even had to escape that string with backslashes here in the SO editor, because it was being interpreted...
Can you please offer more information - what library / SDK are you using? For example, what happens if you use:
https://{{YOUR_TENANT}}.auth0.com/authorize?client_id={{YOUR_CLIENT_ID}}&protocol=oauth2&redirect_uri=https://jwt.io&response_type=token id_token&scope=openid email&nonce=123&state=xyz
Just add https://jwt.io as an allowed callback URL in your client settings in the Auth0 dashboard.
Bear in mind that for SPAs etc., the tokens are returned using URL hash fragments - this is deliberate, as they are only meant for client-side consumption - see here for a quick description. I mention this only in case you were mistaking correct behaviour for a problem.
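Purely as an illustration (not specific to any SDK), a client-side app typically reads those tokens out of the hash fragment roughly like this - the key names follow the OAuth2 implicit flow:

// A minimal sketch: read tokens from the URL hash after the redirect.
// Key names (access_token, id_token) follow the OAuth2 implicit flow.
function parseHashFragment() {
  var params = {};
  var hash = window.location.hash.substring(1); // strip the leading '#'
  hash.split('&').forEach(function (pair) {
    var parts = pair.split('=');
    params[decodeURIComponent(parts[0])] = decodeURIComponent(parts[1] || '');
  });
  return params;
}

var tokens = parseHashFragment();
console.log(tokens.access_token, tokens.id_token);

Note that a stray "#_=_" parses harmlessly here (it becomes a key named "_").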
Our LinkedIn API calls started failing. Even the simplest /v1/people/~ calls started erroring with "This resource is no longer available under v1 APIs".
So we're trying to migrate using the new /v2 way, but somehow it doesn't seem to be working. For example (and after requesting a token with the new scopes), a simple request to /v2/me fails to return the fields we need (amongst others, headline and location). When asking explicitly for these fields, we're told that we don't have access to them - even though the token was generated using the r_basicprofile r_liteprofile r_emailaddress scopes.
We've tried numerous combinations and variations of asking for certain fields, projections, formats, etc. from the Microsoft docs - to no avail - and we're wondering whether the /v2 API is actually functional. Has anyone used it successfully, and if so, how?
A sample cURL request with an obfuscated Bearer token would be a good way for us to understand what we're doing wrong - but it seems that even the simplest requests taken verbatim from the docs just fail.
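For reference, here is the shape of request we're attempting, sketched with fetch (the token is a placeholder, and the projection fields are examples from the docs that may require scopes we don't have):

// A sketch of the kind of /v2/me call we are attempting.
// ACCESS_TOKEN is a placeholder; localizedHeadline is an example field
// from the v2 docs that appears to need r_basicprofile.
var ACCESS_TOKEN = 'REDACTED';
fetch('https://api.linkedin.com/v2/me?projection=(id,localizedFirstName,localizedLastName,localizedHeadline)', {
  headers: { 'Authorization': 'Bearer ' + ACCESS_TOKEN }
})
  .then(function (res) { return res.json(); })
  .then(function (profile) { console.log(profile); })
  .catch(function (err) { console.error(err); });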
EDIT: After some research, it looks like Microsoft changed their versioned API behavior without being consistent in the docs. Some docs point to r_liteprofile and some others to r_basicprofile as the default way to go now without being "LinkedIn Partners". We were previously requesting r_emailaddress too, and the headline and location parts of the r_basicprofile bits were used in our code in many different places.
There were two problems:
Some of the fields were removed from v1 (headline, email, location, etc.),
Most of the fields requested are not available in v2 without special scopes, and these scopes are very poorly documented as being part of a "LinkedIn Partner" program our app has to be accepted into before we can use them.
The basic answer to this question is that LinkedIn (Microsoft) made backward-incompatible changes to their API.
I have been using the Google Client API in a .NET web application, but need to update to the latest version (both to use the most recent code and to lose the dependency on DotNetOpenAuth.dll). The latest version (1.8.1) has a totally redesigned OAuth interface (using Google.Apis.Auth) and I can't seem to even get started with it.
Previously I had written code that handled generating an authorization URL (as needed) and creating IAuthenticator and IAuthorizationState objects, storing the refresh token in a SQL database as needed. I was also able to retrieve "UserInfo" about the user as needed (once authenticated).
Now I'm unclear on how to handle the generation of the auth URL (do I have to do it 100% manually?) and how/what I need to pass to the BaseClientService.Initializer when working with a client API (such as Google Drive).
Also, previously I wrote methods to "store" and "retrieve" credentials from the database - now it seems I would need to write a class based on IDataStore? But I'm not sure if this is even correct (let alone able to find a decent sample/doc anywhere).
Finally, it doesn't seem like Google.Apis.Auth handles anything with regard to UserInfo - I have to grab Google.Apis.Oauth2 - but that .dll has even LESS documentation/sample code out there.
Any advice on where to start? The Google.Apis sample code seems decent for performing basic API tasks, but all the OAuth2 information is very basic, uses file data storage, and seems glossed over.
Thanks!
First of all, take a look at https://developers.google.com/api-client-library/dotnet/guide/aaa_oauth. All the documentation that you might need is there, and if something is missing let us know!
You are right - FileDataStore is an existing implementation of IDataStore, and we are planning to create an EFDataStore as well for the next release.
I have a page tab app that I am hosting. I have both http and https supported. While I receive a signed_request package as expected, after I decode it does not contain page information. That data is simply missing.
I verified that matching schemes (https) are being used among Facebook, my hosted site, and even the 'go-between' - Facebook's static page handler.
I also created a new application with page tab support, but got the same results - simply no page information in the signed_request.
Any other causes people can think of?
I add the app to the page tab using this link:
https://www.facebook.com/dialog/pagetab?app_id=176236832519816&next=https://www.intelligantt.com/Facebook/application.html
Here is the page tab I am using (Note: requires permissions):
https://www.facebook.com/pages/School-Auction-Test-2/154869721351873?id=154869721351873&sk=app_176236832519816
Here is the decoded signed_request I am receiving:
{"algorithm":"HMAC-SHA256","code":!REMOVED!,"issued_at":1369384264,"user_id":"1218470256"}
5/25 Update - I thought maybe the canvas app URLs didn't match the page tab URLs, so I spent several hours going through scenarios where they both had a trailing slash or not, a trailing ? or not, and query parameters or not.
I also tried changing the 'next' value when creating the page tab to the canvas app url and the page tab url.
No success on either count.
I did read that because I'm seeing the 'code' value in the signed_request, it means Facebook either couldn't match my URLs or I'm capturing the second request. However, given all the URL permutations I went through, I believe the URLs match. I also subscribed to the 'auth.authResponseChange' event, which should give me the very first authResponse containing the signed_request with page.id in it (but it doesn't).
If I had any reputation, I'd add a bounty to this.
Thanks.
I've just spent ~5 hours on this exact same problem and posted a prior answer that was incorrect. Here's the deal:
As you pointed out, signed_request appears to be missing the page data if your tab is implemented in pure JavaScript as a static HTML page (with a *.htm extension).
I repeated the exact same test, on the exact same page, but wrapped my HTML page (including the JS) in a Perl script (with a *.cgi extension)... and voila, signed_request has the page info.
Although confusing (and it should be better documented as a design choice by Facebook), this may make some sense, because it would be impossible to validate the signed_request wholly within JavaScript without placing your secret key within scope (and therefore revealing it to a potential attacker).
It would be much easier with the PHP SDK, but if you just want to use JavaScript, maybe this will help:
Facebook Registration - Reading the data/signed request with Javascript
Also, you may want to check out this: https://github.com/diulama/js-facebook-signed-request
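For completeness, decoding (not verifying) the signed_request payload in plain JavaScript looks roughly like this; verification needs your app secret, so it has to stay server-side:

// Decode (but do NOT verify) a signed_request payload in the browser.
// Verification requires the app secret and must stay server-side.
function decodeSignedRequest(signedRequest) {
  var payload = signedRequest.split('.')[1]; // part after "signature."
  var base64 = payload.replace(/-/g, '+').replace(/_/g, '/'); // base64url -> base64
  while (base64.length % 4) base64 += '='; // atob needs padding
  return JSON.parse(atob(base64));
}

// Hypothetical usage with the JS SDK's auth response:
// var data = decodeSignedRequest(response.authResponse.signedRequest);
// console.log(data.page); // undefined in the static-HTML case described above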
Simply put, you can't get the full params with the JavaScript signed_request; use the PHP SDK to get the full signed_request and record the values you need into JavaScript variables.
With the PHP SDK, after instantiation, use the facebook object as follows:
$signed_request = $facebook->getSignedRequest();
var_dump($signed_request);
This is just for debugging, but you'll see that the printed array contains many values that you won't get with the JS SDK, for security reasons.
Hope that helps anyone who needs it - it seems this issue takes at least 3 hours for everyone who runs into it.
I'm trying to achieve URLs in the form of http://localhost:9294/users instead of http://localhost:9294/#/users.
This seems possible according to the documentation, but I haven't been able to get it working for "bookmarkable" URLs.
To clarify, browsing directly to http://localhost:9294/users gives a 404 "Not found: /users".
You can turn on HTML5 History support in Spine like this:
Spine.Route.setup(history: true)
Passing the history: true argument to Spine.Route.setup() enables the fancy URLs without the hash.
The documentation for this is actually buried a bit, but it's here (second to last section): http://spinejs.com/docs/routing
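Putting it together, a minimal sketch (the /users route is illustrative):

// Register routes first (the /users path is illustrative).
Spine.Route.add("/users", function (params) {
  // render the users view here
});

// Then enable HTML5 pushState routing instead of hash fragments;
// call setup() after your routes have been registered.
Spine.Route.setup({history: true});

// Navigating now produces /users rather than /#/users.
Spine.Route.navigate("/users");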
EDIT:
In order to have URLs that can be navigated to directly, you will have to handle this server-side. For example, with Rails, you would have to build a way to take the path of the URL (in this case "/users") and pass it to Spine accordingly. Here is an excerpt from the Spine docs:
However, there are some things you need to be aware of when using the History API. Firstly, every URL you send to navigate() needs to have a real HTML representation. Although the browser won't request the new URL at that point, it will be requested if the page is subsequently reloaded. In other words you can't make up arbitrary URLs, like you can with hash fragments; every URL passed to the API needs to exist. One way of implementing this is with server side support.
When browsers request a URL (expecting a HTML response) you first make sure on server-side that the endpoint exists and is valid. Then you can just serve up the main application, which will read the URL, invoking the appropriate routes. For example, let's say your user navigates to http://example.com/users/1. On the server-side, you check that the URL /users/1 is valid, and that the User record with an ID of 1 exists. Then you can go ahead and just serve up the JavaScript application.
The caveat to this approach is that it doesn't give search engine crawlers any real content. If you want your application to be crawl-able, you'll have to detect crawler bot requests, and serve them a 'parallel universe of content'. That is beyond the scope of this documentation though.
It's definitely a good bit of effort to get this working properly, but it CAN be done. It's not possible to give you a specific answer without knowing the stack you're working with.
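As an illustration only - assuming a Node/Express backend (substitute your own stack) - the server-side piece might look like:

// A sketch of server-side support for pushState URLs, assuming Express.
// Any URL the server considers valid gets the same single-page app;
// Spine then reads the URL and invokes the matching route client-side.
var express = require('express');
var app = express();

app.use(express.static('public')); // your built JS/CSS assets

// Validate the endpoint server-side, then serve the app shell.
app.get('/users/:id', function (req, res) {
  // check here that the User record exists; return 404 if not
  res.sendFile(__dirname + '/public/index.html');
});

app.listen(9294);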
I used the following rewrites as explained in this article.
http://www.josscrowcroft.com/2012/code/htaccess-for-html5-history-pushstate-url-routing/
Google Plus returns AJAX responses with )]}' on the first line. I heard it is protection against XSS. Are there any examples of what someone could do without that protection?
Here's my best guess as to what's happening here.
First off, there are other aspects of the Google JSON format that aren't quite valid JSON. So, in addition to any protection purposes, they may be using this specific string to signal that the rest of the file is in Google's JSON format and needs to be interpreted accordingly.
Using this convention also means that the data feed won't execute when loaded via a script tag, nor when the JavaScript is interpreted directly via eval(). This ensures front-end developers pass the content through a parser, which keeps any implanted code from executing.
So to answer your question, there are two plausible attacks that this prevents: one is cross-site, through a script tag, but the more interesting one is within-site. Both attacks assume that:
a bug exists in how user data is escaped and
it is exploited in a way that allows an attacker to inject code into one of the data feeds.
As a simple example, let's say a feed normally contains
["example"]
and a user figured out how to inject the string "];alert('example'); so that the feed becomes
[""];alert('example');"]
Now when that data shows up in another user's feed, the attacker can execute arbitrary code in that user's browser. Since it's within-site, cookies are sent to the server, and the attacker could automate things like sharing posts or messaging people from the user's account.
In the Google scenario, these attacks won't work for a number of reasons. The first five characters will cause a JavaScript error before the attack code is run. Plus, since developers are forced to parse the response instead of accidentally running it through an eval, this practice prevents the code from being executed anyway.
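To make that concrete, a same-origin consumer strips the prefix before parsing, roughly like this (prefix string as in the question):

// Strip the anti-XSSI prefix before parsing; eval() or a <script> tag
// would choke on the prefix, which is exactly the point.
var PREFIX = ")]}'";

function parseGuardedJson(text) {
  if (text.indexOf(PREFIX) === 0) {
    text = text.slice(PREFIX.length);
  }
  return JSON.parse(text);
}

// Example with a hypothetical response body:
var body = ")]}'\n[\"example\"]";
console.log(parseGuardedJson(body)); // ["example"]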
As others said, it's a protection against Cross-Site Script Inclusion (XSSI).
We explained this on Gruyere as:
Third, you should make sure that the script is not executable. The standard way of doing this is to append some non-executable prefix to it, like ])}while(1);. A script running in the same domain can read the contents of the response and strip out the prefix, but scripts running in other domains can't.