I have a page tab app that I am hosting, with both http and https supported. While I receive a signed_request package as expected, after I decode it, it does not contain any page information. That data is simply missing.
I verified that matching schemes (https) are used among Facebook, my hosted site, and even the 'go between'-- Facebook's static page handler.
I also created a new application with page tab support, but got the same result-- simply no page information in the signed_request.
Any other causes people can think of?
I add the app to the page tab using this link:
https://www.facebook.com/dialog/pagetab?app_id=176236832519816&next=https://www.intelligantt.com/Facebook/application.html
Here is the page tab I am using (Note: requires permissions):
https://www.facebook.com/pages/School-Auction-Test-2/154869721351873?id=154869721351873&sk=app_176236832519816
Here is the decoded signed_request I am receiving:
{"algorithm":"HMAC-SHA256","code":!REMOVED!,"issued_at":1369384264,"user_id":"1218470256"}
5/25 Update - I thought maybe the canvas app urls didn't match the page tab urls, so I spent several hours going through scenarios: where they both had a trailing slash or not, where they both had a trailing ? or not, with query parameters or not.
I also tried changing the 'next' value when creating the page tab to the canvas app url and the page tab url.
No success on either count.
I did read that seeing the 'code' value in the signed_request means Facebook either couldn't match my urls or that I'm capturing the second request. However, given all the URL permutations I went through, I believe the urls match. I also subscribed to the 'auth.authResponseChange' event, which should give me the very first authResponse containing the signed_request with page.id in it (but doesn't).
If I had any reputation, I'd add a bounty to this.
Thanks.
I've just spent ~5 hours on this exact same problem and posted a prior answer that was incorrect. Here's the deal:
As you pointed out, signed_request appears to be missing the page data if your tab is implemented in pure javascript as a static html page (with *.htm extension).
I repeated the exact same test, on the exact same page, but wrapped my html page (including js) within a Perl script (with *.cgi extension)... and voila, signed_request has the page info.
Although confusing (and it should be better documented as a design choice by Facebook), this may make some sense: it would be impossible to validate the signed_request wholly within JavaScript without placing your secret key within scope (and therefore revealing it to a potential hacker).
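For the curious, here is a minimal Node.js sketch of that server-side validation (assuming the documented signed_request format: a base64url-encoded HMAC-SHA256 signature, a dot, then a base64url-encoded JSON payload):

    const crypto = require('crypto');

    // Decode a base64url string into a Buffer
    function b64urlDecode(str) {
      return Buffer.from(str.replace(/-/g, '+').replace(/_/g, '/'), 'base64');
    }

    // Verify the signature with your app secret, then return the decoded payload
    function parseSignedRequest(signedRequest, appSecret) {
      const [encodedSig, payload] = signedRequest.split('.');
      const sig = b64urlDecode(encodedSig);
      const expected = crypto.createHmac('sha256', appSecret).update(payload).digest();
      if (sig.length !== expected.length || !crypto.timingSafeEqual(sig, expected)) {
        throw new Error('Bad signed_request signature');
      }
      return JSON.parse(b64urlDecode(payload).toString('utf8')); // includes page.id when Facebook sends it
    }

This is exactly why the secret can't live in browser JavaScript: anyone viewing source could forge requests.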
It would be much easier with the PHP SDK, but if you just want to use JavaScript, maybe this will help:
Facebook Registration - Reading the data/signed request with Javascript
Also, you may want to check out this: https://github.com/diulama/js-facebook-signed-request
Simply put, you can't get the full params with the JavaScript signed_request. Use the PHP SDK to get the full signed_request, and record the values you need into JavaScript variables.
With the PHP SDK, after instantiation, use the Facebook object as follows:
$signed_request = $facebook->getSignedRequest();
var_dump($signed_request) ;
This is just to debug, but you'll see that the printed array contains many values that you won't get with the JS SDK, for security reasons.
Hope that helps anyone who needs it, because it seems this issue takes at least 3 hours for everyone who runs into it.
Related
I was writing against the TD Ameritrade API (my first time creating or dealing with APIs) and I needed to put in my own callback URL. I know that the callback URL is where the API sends information to, and I heard that I can just use localhost. I scoured the internet and I don't know how that would work, so I was wondering: can I just use http://localhost?
Sorry if I seem like a noob because I am
In short, yes.
Follow the excellent directions at
https://www.reddit.com/r/algotrading/comments/c81vzq/td_ameritrade_api_access_2019_guide/. (Even with them, I spent excessive time on trial and error!)
I used the Chrome browser after first trying Brave, which did not work for me, possibly because of my option selections.
Go to https://developer.tdameritrade.com/user/me/apps
Add a new app using http://localhost (delete the existing app if there is one).
Copy the resulting consumer key text string (AKA client_id or OAuth User ID).
Go to https://developer.tdameritrade.com/content/simple-auth-local-apps and follow the instructions. Note: copying/pasting the auth code through MSWord inserted leading/trailing blanks, which had to be manually deleted after I wasted excessive time identifying the problem. The address string looks like:
https://auth.tdameritrade.com/auth?response_type=code&redirect_uri=http%3A%2F%2Flocalhost&client_id=ConsumerKeyTextString%40AMER.OAUTHAP
This returns a page stating the server refused to connect, but the address bar now contains a VeryLongStringOfCharacters:
https://localhost/?code=VeryLongStringOfCharacters
Copy the contents of the address bar, go to https://www.urldecoder.org/, decode the above, and extract the text after "code=". This is your authorization code.
Go to https://developer.tdameritrade.com/authentication/apis/post/token-0 and fill out the fields with:
grant_type=authorization_code
refresh_token=<<blank>>
access_type=offline
code=AuthorizationCodeTextString
client_id=ConsumerKeyTextString#AMER.OAUTHAP
redirect_uri=http://localhost
Press SEND.
If the resulting page starts with HTTP/1.1 200 OK, you have succeeded.
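If you'd rather script that last POST than use the web form, here is a minimal Node.js sketch (assuming Node 18+ for built-in fetch; treat the exact endpoint URL as an assumption of what the post/token-0 page wraps):

    // Placeholder: paste the raw code= value copied from the address bar here
    const rawCode = 'VeryLongStringOfCharacters';
    const decodedCode = decodeURIComponent(rawCode); // same as the urldecoder.org step

    const params = new URLSearchParams({
      grant_type: 'authorization_code',
      refresh_token: '',
      access_type: 'offline',
      code: decodedCode,
      client_id: 'ConsumerKeyTextString@AMER.OAUTHAP',
      redirect_uri: 'http://localhost',
    });

    fetch('https://api.tdameritrade.com/v1/oauth2/token', {
      method: 'POST',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: params,
    })
      .then((res) => res.json())
      .then((tokens) => console.log(tokens)); // on success, contains access_token and refresh_token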
Try updating your redirect to:
redirect_uri=https://localhost
They may require https now. Everything else looks correct. This process generally takes me more than one attempt, and 15 minutes to an hour, to get my refresh token squared away every 90 days.
Don't use #AMER.OAUTHAP in client_id.
If you generate a new code and, based on that, try to get a new access token, it should work.
I'm trying to add CSRF protection to an existing MVC4 web application which uses DevExpress grids. I've added Html.AntiForgeryToken() into the forms on the aspx pages (which contain ascx partials containing the grids) and can see the __RequestVerificationToken and its value clearly in developer tools when a save is called.
I've tried commenting out all my ValidateAntiForgeryToken attributes bar one - I went with the delete post method for simplicity (and also to eliminate the DevExpress grids messing with it) - and I still keep coming up against this error:
There was a HttpAntiForgeryException
Url: http://localhost:54653/Users/Delete/f86ad393-0039-44e8-beed-a66dbab9266e?ReturnURL=http%3A%2F%2Flocalhost%3A54653%2FUsers
The exception message is
The required anti-forgery form field "__RequestVerificationToken" is not present.
Does anyone have any idea why this might be happening? Could it be that the error is non-descriptive and it's actually that the token doesn't match rather than that it doesn't exist? In previous answers to this question people just say "oh, you have to add the token," which is obviously not helpful here.
Are you submitting the form manually through Ajax? If that's the case, you need to pass the anti-forgery token as another parameter with the name "__RequestVerificationToken".
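For example, a minimal jQuery sketch (assuming the page already renders Html.AntiForgeryToken() somewhere and that jQuery is loaded):

    // Read the hidden input that Html.AntiForgeryToken() renders
    var token = $('input[name="__RequestVerificationToken"]').val();

    // Placeholder id, taken from the question's URL
    var userId = 'f86ad393-0039-44e8-beed-a66dbab9266e';

    // Pass the token as a regular POST parameter alongside the form data
    $.post('/Users/Delete/' + userId, {
      __RequestVerificationToken: token
    }, function () {
      // handle success
    });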
Point 1: Make sure your application is served over the https secure protocol; load it over https.
Point 2: In the case of DevExpress, you have to emit the token using the pattern below.
ViewContext.Writer.Write(Html.AntiForgeryToken().ToHtmlString());
After struggling with this for days I had a thought - maybe the browser is stopping the cookie being written. I did a search for dev servers and cookies not being written, and found that Chrome, and IE10 and up, have problems writing the cookies.
I downloaded Firefox and tried it, and it worked instantly. I then reapplied all the validate attributes to all the controller methods and they all worked, every single one of them! Even the DevExpress postbacks seem to be working correctly.
I'll carry out more exhaustive testing, but for now, I think we're there.
Not exactly. If the MVC AntiForgeryToken is already defined on the page where you are using MVCxGridView and you want to protect grid actions, you should send this token back to the server during the grid's client-side BeginCallback event.
settings.ClientSideEvents.BeginCallback = "function(s,e) { e.customArgs[\"__RequestVerificationToken\"] = $('input[name=\"__RequestVerificationToken\"]', $(s.GetMainElement())).val(); }";
I'm trying to achieve urls in the form of http://localhost:9294/users instead of http://localhost:9294/#/users
This seems possible according to the documentation but I haven't been able to get this working for "bookmarkable" urls.
To clarify, browsing directly to http://localhost:9294/users gives a 404 "Not found: /users"
You can turn on HTML5 History support in Spine like this:
Spine.Route.setup(history: true)
Passing the history: true argument to Spine.Route.setup() enables the fancy URLs without the hash.
The documentation for this is actually buried a bit, but it's here (second to last section): http://spinejs.com/docs/routing
EDIT:
In order to have urls that can be navigated to directly, you will have to handle this server side. For example, with Rails, you would have to build a way to take the path of the url (in this case "/users") and pass it to Spine accordingly. Here is an excerpt from the Spine docs:
However, there are some things you need to be aware of when using the History API. Firstly, every URL you send to navigate() needs to have a real HTML representation. Although the browser won't request the new URL at that point, it will be requested if the page is subsequently reloaded. In other words you can't make up arbitrary URLs, like you can with hash fragments; every URL passed to the API needs to exist. One way of implementing this is with server side support.

When browsers request a URL (expecting a HTML response) you first make sure on server-side that the endpoint exists and is valid. Then you can just serve up the main application, which will read the URL, invoking the appropriate routes. For example, let's say your user navigates to http://example.com/users/1. On the server-side, you check that the URL /users/1 is valid, and that the User record with an ID of 1 exists. Then you can go ahead and just serve up the JavaScript application.

The caveat to this approach is that it doesn't give search engine crawlers any real content. If you want your application to be crawl-able, you'll have to detect crawler bot requests, and serve them a 'parallel universe of content'. That is beyond the scope of this documentation though.
It's definitely a good bit of effort to get this working properly, but it CAN be done. It's not possible to give you a specific answer without knowing the stack you're working with.
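As one concrete illustration (a sketch only, assuming a Node/Express stack rather than Rails), the server-side support could look like this:

    // Sketch: server-side support for pushState URLs with Express
    const express = require('express');
    const path = require('path');
    const app = express();

    app.use(express.static('public')); // compiled Spine app + assets

    // Validate the endpoint server-side, then serve the single-page app;
    // Spine reads the URL on load and invokes the matching route client-side.
    app.get('/users/:id', (req, res) => {
      // e.g. look up the User record here and return a 404 if it doesn't exist
      res.sendFile(path.join(__dirname, 'public', 'index.html'));
    });

    app.listen(9294);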
I used the following rewrites as explained in this article.
http://www.josscrowcroft.com/2012/code/htaccess-for-html5-history-pushstate-url-routing/
I have a web page that has 70000 characters. As you know, when translating through the Google API you can only send up to 5000 characters at a time. That means I have to send data to Google 14 times (70000/5000), which takes a lot of time before my page is displayed. Is there a way to speed up the process?
Thanks
Have you tried caching the translation?
If you are using some AJAX framework (you don't mention what your web page is created with, e.g. C#), then you can make it faster by making the API calls via the AJAX framework.
It would look something like this (pseudo-code, since we don't know what you are using):
Serve web page (almost instant)
Web page starts AJAX call:
Break text into chunks
Foreach chunk
Translate via API
Append to the page
This way the user will see the page immediately, and will also see the translation appear piece by piece as it is processed, instead of having to wait until the end.
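A minimal JavaScript sketch of that loop (translateChunk() here is a hypothetical wrapper around whatever translation API call you are using, returning a Promise of the translated text):

    // Break the text into 5000-character chunks, translate each, and append
    // the results to the page as they arrive.
    async function translateAndRender(text, targetEl) {
      for (let i = 0; i < text.length; i += 5000) {
        const chunk = text.slice(i, i + 5000);
        const translated = await translateChunk(chunk); // hypothetical API wrapper
        targetEl.insertAdjacentText('beforeend', translated);
      }
    }

In practice you would want to split on sentence boundaries rather than at exactly 5000 characters, so a sentence is never cut in half mid-translation.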
My best bet would be to generate the page in one language, then ask Google to translate it through HTTP and display the result as your own, to make it seamless for the user. I believe that is what Google Chrome does when translating web pages.
Example of URL that makes Google translate the whole web page:
http://translate.google.com/translate?hl=en&sl=ru&tl=en&u=http%3A%2F%2Flinux.org.ru%2F
Of course, another option is to use Google Translate API and cache result if page content is not changing frequently.
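Building that URL for an arbitrary page is straightforward (sl and tl are the source and target language codes):

    // Build a translate.google.com URL for any page
    var pageUrl = 'http://linux.org.ru/';
    var translateUrl = 'http://translate.google.com/translate?hl=en&sl=ru&tl=en&u=' +
        encodeURIComponent(pageUrl);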
Go to the JavaScript file Google serves; it will lead you to the CSS file as well. Make a copy of each (or you may be able to add the CSS to your own stylesheet), and host the JavaScript on your web site in its own directory. Add a snippet of code to update that JavaScript every so many seconds or minutes, and this will make the transition much faster, just by refreshing the content they give. Have fun :) Ultimately you can also send a second request at the same time as the first one to translate the text after character 5000, which should be relatively easy to do.
Some e-Marketing tools claim to choose which web page to display based on where you were before. That is, if you've been browsing truck sites and then go to Ford.com, your first page would be of the Ford Explorer.
I know you can get the immediately preceding page with HTTP_REFERER, but how do you know where they were 6 sites ago?
JavaScript: this should get you started: http://www.dicabrio.com/javascript/steal-history.php
There are more nefarious means, too: http://ha.ckers.org/blog/20070228/steal-browser-history-without-javascript/
Edit: I wanted to add that although this works, it is a sleazy marketing technique and an invasion of privacy.
Unrelated but relevant: if you only want to look one page back and you can't get at the headers, document.referrer gives you the place a visitor came from.
You can't access the values of the entries in the browser history (neither client side nor server side). All you can do is send the browser back or forward a number of steps. The entries of the history are otherwise hidden from programmatic access.
Also note that HTTP_REFERER won't be there if the user typed the address in the URL bar instead of following a link to your page.
The browser history can't be directly accessed, but you can compare a list of sites with the user's history. This can be done because the browser attributes a different CSS style to a link that hasn't been visited and one that has.
Using this style difference you can change the content of your pages using pure CSS, but in general JavaScript is used. There is a good article here about using this trick to improve the user experience by displaying only the RSS aggregator or social bookmarking links that the user actually uses: http://www.niallkennedy.com/blog/2008/02/browser-history-sniff.html
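A minimal JavaScript sketch of the technique (worth noting: modern browsers have since restricted getComputedStyle on :visited links specifically to block this kind of sniffing):

    // Requires a stylesheet rule such as: a:visited { color: rgb(255, 0, 0); }
    function wasVisited(url) {
      var link = document.createElement('a');
      link.href = url;
      document.body.appendChild(link);
      var visited = getComputedStyle(link).color === 'rgb(255, 0, 0)';
      document.body.removeChild(link);
      return visited;
    }

    // Compare a candidate list of sites against the user's history
    var sites = ['http://digg.com/', 'http://del.icio.us/'];
    var visitedSites = sites.filter(wasVisited);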