Fetching data via Facebook connect taking over 10 seconds - api

Our site uses Facebook Connect. When a new user signs up, we ask for permission to pull their interest data, their list of friends, and their friends' interests. Fetching this data used to be a very quick process (a couple of seconds). Over the last week or so, the time to fetch this data has increased to 10+ seconds. According to Facebook Insights, our site is not being throttled. We didn't make any changes to our site.
Anyone else experiencing this issue with Facebook? Have any ideas for how to address it?
Thanks!

As of 1/26 at 7:55 PM EST, the live status page doesn't indicate any irregular activity.
Sometimes this occurs because a user simply has a lot of likes and interests. I would recommend making this operation asynchronous, following a flow something like this (a rough sketch follows the steps):
User connects with your app
Get the access token and store it in a queue that a background process can access.
Get all the information you need immediately to make the app work.
Some time later
In a background process, grab an access token from the queue, fetch the data with it, and parse and handle it however you'd like.
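To make the steps concrete, here is a minimal sketch of that queue-based flow in Python, assuming a simple in-process queue and the Graph API connections for interests and friends; the on_user_connected() and store_interests() helpers are illustrative, not part of any Facebook SDK.

```python
import queue
import threading

import requests

# Tokens queued at connect time; a real deployment would use a durable queue
# (Redis, a jobs table, etc.) rather than an in-process one.
token_queue = queue.Queue()

def on_user_connected(access_token):
    # Steps 1-3: do only the work the signup flow needs right now,
    # then queue the token so the heavy fetch happens later.
    token_queue.put(access_token)

def store_interests(interests, friends):
    # Illustrative persistence hook; replace with your own storage.
    print("fetched", len(friends.get("data", [])), "friends")

def background_worker():
    while True:
        token = token_queue.get()  # blocks until a token is queued
        # "Some time later": pull interests and friends without blocking signup.
        interests = requests.get(
            "https://graph.facebook.com/me/interests",
            params={"access_token": token},
        ).json()
        friends = requests.get(
            "https://graph.facebook.com/me/friends",
            params={"access_token": token},
        ).json()
        store_interests(interests, friends)
        token_queue.task_done()

threading.Thread(target=background_worker, daemon=True).start()
```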
A simpler, although less stable, option is redirecting the user to a page upon installation that makes an AJAX request telling your server to download the information from the Graph API. This keeps the response time low, but it does require the user to have JavaScript enabled and to stay on the destination page long enough for the request to be made.
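For illustration, the server side of that variant might look something like this with Flask; the route name, session key, and the single interests call are assumptions for the sketch, and the landing page would trigger it with an AJAX call such as fetch('/prefetch-graph-data', {method: 'POST'}).

```python
import requests
from flask import Flask, jsonify, session

app = Flask(__name__)
app.secret_key = "change-me"  # required for session access in this sketch

@app.route("/prefetch-graph-data", methods=["POST"])
def prefetch_graph_data():
    # The post-install landing page calls this route via AJAX, so the slow
    # Graph API pull happens here instead of inside the signup request.
    token = session.get("fb_access_token")  # assumed to be stored at connect time
    if token:
        interests = requests.get(
            "https://graph.facebook.com/me/interests",
            params={"access_token": token},
        ).json()
        # Persist `interests` however your app stores them.
    return jsonify(status="ok")
```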

Related

How to get Delphi to read a logged-in webpage from the default browser - not twebbrowser

I am trying to read data from a webpage that requires a login. I could use twebbrowser and have the user login through that, however, the point is to not allow my app to handle any security credentials, even through twebbrowser. My hope is that the user would login on their default browser, and then my app would load the page as a logged-in user without any credentials going through my app. I swear there was a time, many moons ago, when I was able to do this. However I can't seem to get it to happen now. Is this possible? Or any other suggestions for connecting to a logged-in website without the credentials going through my app? Thanks in advance.
Additional info:
What I am trying to do is write an app that reads purchase history from a rapid-fire sale website (new item every few minutes) and keeps a running total in real time, while also warning the user if something comes up for sale that they've already purchased (because the website repeats things that didn't sell out). I prefer to keep my app as only a data aggregator, i.e. read-only, completely separated from logging in, purchasing, etc. I don't want people worrying about entering their password or credit card in my app.

Google one tap sign up/sign in approval request

I requested access to the One Tap sign-up/sign-in web API as documented in https://developers.google.com/identity/one-tap/web/
However, I haven't heard anything back from them for more than a week. Does anybody know how long it usually takes for the request to be reviewed?
Thanks
I got mail back from them around 3 weeks after submitting the form.
They said that access to the API won't be granted at the moment. However, they are in the process of adding some additional security on their side, which may take a few months. We can hope to get API access then.

Does New Relic real time monitoring assume a single user logged in to a web application?

I wanted to know: will I get different results in New Relic real-time user monitoring when many users are logged into the application concurrently? Or is the only way to achieve that to use a load testing tool?
You will likely see different results when more people are using your site at once.
The JavaScript injected for Real User Monitoring (RUM) collects timing information in the browser that contains details to identify the specific app and the web transaction processed on the backend, as well as how time was spent in the app for each request. When a page completes loading in an end user’s browser, RUM sends the information back to New Relic asynchronously, so it doesn’t affect page load time. RUM uses the IP address to resolve the geographic location of each request.
For more information on this, see "How Real User Monitoring works" on the New Relic knowledge base.

Design for getting Twitter friends list for large user base and managing rate limiting

Assume there's a mobile app and a server.
I have a question about rate limiting, and I'm hoping someone can give some advice on a design, as I'm banging my head on how to work around the rate limit. There must be something I'm missing, because the 150 unauthenticated requests per IP per hour is an extremely low limit.
Imagine the scenario I want to build is the following (simplified into a trivial example for this discussion). Assume the user is signed into Twitter for this entire discussion, so we can set OAuth aside.
The mobile app talks to our service to show the user's Twitter friends list. Every time the app is loaded, it shows the entire friends list, highlighting the new friends that were added within the last 2 days.
That's it. But the trick is that I want to ensure the friends list is always up to date in the client, which means our server has to have the most recent friends list.
Periodically, I want my server to automatically scan the Twitter friends list for every user of my app to see if new friends have been added.
Our initial design was getting our server to do all the work with this flow:
New User signs in on client, gives access token to server
Server makes call to Twitter REST APIs to get initial friends lists
Server stores the Twitter friend IDs and responds to the client with that list.
Periodically (e.g. every 48 hours), the server checks the Twitter REST APIs for each user's friends list and compares it to the cached list we have for them, to see who is new and should be highlighted in the mobile app.
The good thing about this is that all the interaction with Twitter to get the friends list, compare it, and periodically refresh it is on the server. The mobile client just makes a single call to my server and gets the friends list.
The problem with this design is that it will work for a single user, but since the rate limit is 150 per hour on unauthenticated calls, I will hit my limit as soon as 151 users use my service (which has a fixed IP).
The only solution I can see is to have the client do the work for each user, then send my server the friends list, which it caches. This takes care of Step #2 above. However, for Step #4, I'd have to build something into the client to auto-refresh the Twitter friends and send them back to the server.
It's super clumsy to have the client involved at all in this friends-list operation.
At first I thought I was crazy and that public unauthenticated APIs like getting friends lists wouldn't be subject to rate limiting. However, according to their docs, they are.
Am I missing something obvious or is the only way to solve this is to put heavy logic into the client?
With whitelisting gone for those that aren't grandfathered or Twitter business partners, I don't think you have any alternative but to have your mobile app do the Twitter API calls from the handset.
Having the handset call Twitter isn't a bad thing by any means. Pretty much every Twitter client in the world does it. One benefit will be that the user will be authenticated to Twitter, and thus her full 350 calls per hour will be available to you. Keep in mind, however, that you should minimize your calls since the user may have other Twitter-aware applications installed on her handset eating into your call allotment, and vice versa.
Now to the solution. The way I would implement your use case would be to first fetch the complete list of friends for your user by calling the friends/ids method.
http://api.twitter.com/1/friends/ids.json?screen_name=yourUsersName
The above call will return the most recent 5,000 friend IDs, in order followed, for #yourUsersName. If you want to fetch more friend IDs than the first 5,000, you'll need to specify the cursor parameter to initiate paging.
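As a sketch of that paging loop (not from the original answer), assuming the calls are already OAuth-signed on behalf of the user, e.g. with a requests_oauthlib OAuth1Session passed in as session:

```python
def fetch_all_friend_ids(session, screen_name):
    """Page through friends/ids 5,000 IDs at a time using the cursor parameter."""
    friend_ids = []
    cursor = -1  # -1 requests the first page
    while cursor != 0:  # the API returns next_cursor == 0 on the last page
        resp = session.get(
            "http://api.twitter.com/1/friends/ids.json",
            params={"screen_name": screen_name, "cursor": cursor},
        )
        resp.raise_for_status()
        data = resp.json()
        friend_ids.extend(data["ids"])
        cursor = data["next_cursor"]
    return friend_ids
```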
Next, I would check the latest list of friends we just fetched against the list on the handset, syncing them by removing any IDs that are no longer present, while adding any that are new.
If we only need the friend IDs, then we're done at a cost of one API call per 5,000 friend IDs. If, however, we need to get user info for these new friends as well, then I would call users/lookup and pass in the list of all new users that we discovered while syncing friend IDs. You can request up to 100 user objects at a time.
http://api.twitter.com/1/users/lookup.json?user_id=123123,5235235,456243,4534563
Your user must be authenticated in order to make the above request, but the call can fetch any Twitter user profiles you wish -- not just those that are friends of the authenticated user.
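A rough sketch of that sync-then-hydrate step, under the same assumptions as above (the session is signed for the user; the function and variable names are illustrative):

```python
def sync_and_hydrate(session, cached_ids, latest_ids):
    """Diff the cached friend IDs against the fresh list, then fetch profiles
    for the new friends only, up to 100 user objects per users/lookup call."""
    latest, cached = set(latest_ids), set(cached_ids)
    removed = cached - latest         # friends to drop from the handset's list
    added = sorted(latest - cached)   # friends we still need profiles for

    profiles = []
    for i in range(0, len(added), 100):
        batch = added[i:i + 100]
        resp = session.get(
            "http://api.twitter.com/1/users/lookup.json",
            params={"user_id": ",".join(str(uid) for uid in batch)},
        )
        resp.raise_for_status()
        profiles.extend(resp.json())
    return removed, set(added), profiles
```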
So, let's say for example that a user has 2,500 friends and has never used your app before. In that case, she would burn one call to fetch all of the friend IDs, and 25 calls for her friends' information. That's not too bad to get the app populated with data.
Subsequent calls should be more streamlined with probably only two calls burned (one for the IDs, and one to get the new friends).
Finally, once the data has been updated on the handset, the deltas for the IDs and user data can be gathered up and pushed to your server.
It may even be possible that your server application won't have to interface with Twitter at all, and that should alleviate the 150-call limit you are encountering.
Some final notes:
Be sure to note in your app's privacy policy that you sync your user's friend list with your server.
I recommend specifying JSON as the return format for all Twitter API calls. It is a much more lightweight document format than XML, and you will typically transfer only about 1/3 to 1/2 as much data over the wire.
Pick a Twitter framework appropriate for your mobile device and your programming language. Twitter access is a commodity these days, and there's little to no reason to reinvent how to access the Twitter API.
I answered a similar question about an approach for efficiently fetching followers here.
Since you are making requests on behalf of users, you should make those requests authenticated as those users. Then the requests will count against each user's own pool of 350 requests/hour.
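For illustration, building that per-user signed session could look like this with requests_oauthlib (the consumer-key placeholders and the stored-token attributes are assumptions for the sketch):

```python
from requests_oauthlib import OAuth1Session

APP_CONSUMER_KEY = "your-app-consumer-key"        # placeholder
APP_CONSUMER_SECRET = "your-app-consumer-secret"  # placeholder

def session_for(user):
    # Sign calls with the token the user granted at sign-in, so each request
    # counts against that user's own 350/hour pool rather than your server's IP.
    return OAuth1Session(
        client_key=APP_CONSUMER_KEY,
        client_secret=APP_CONSUMER_SECRET,
        resource_owner_key=user.oauth_token,           # stored at sign-in
        resource_owner_secret=user.oauth_token_secret,
    )
```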

Twitter API for searching a user's friends?

I'm working on an app that allows users to search for a particular friend on Twitter (and eventually Facebook) and then send them a message (sort of).
My problem is, the API limits me to only getting 100 friends per request. For a user with a lot of friends, this could take many requests (even if I cache it) and will make my app hit the rate limit pretty quickly.
Is there an official (or unofficial) Twitter API for searching for only your friends?
The solution I have implemented for now is this: whenever a user logs in, iterate through each block of 100 friends and put them in the Rails.cache. They stay there until the user logs out and logs back in. Now that I know the API requests are counted against the logged-in user, I shouldn't need to worry about hitting the API rate limit, since each user will have 350 requests per hour.
However, I have found a few problems with this, and I have a few thoughts on solutions:
Problem: We are storing a large amount of data to cache someone's friends.
Solution: It would be best if we could cache all Twitter users who are friends of one of our users in one object (or hash), and also cache only the IDs of the friends for each user (which can be grabbed with far fewer API calls). This would create a bit of a slowdown, but would require far less storage. Then, whenever a user logs in, we would simply update the global friend cache with any changes (i.e. picture, name, etc.).
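A rough sketch of that two-level cache, written here in Python for brevity (in the Rails app it would live in Rails.cache; the names are illustrative):

```python
# Shared profile store keyed by Twitter ID, plus a lightweight per-user ID list.
profiles_by_twitter_id = {}   # twitter_id -> profile hash (name, picture, ...)
friend_ids_by_user = {}       # our user's id -> list of friend twitter_ids

def update_on_login(our_user_id, fresh_friend_ids, fresh_profiles):
    friend_ids_by_user[our_user_id] = list(fresh_friend_ids)  # cheap: IDs only
    for profile in fresh_profiles:
        # Refresh name, picture, etc. in the shared cache.
        profiles_by_twitter_id[profile["id"]] = profile

def friends_for(our_user_id):
    return [profiles_by_twitter_id[fid]
            for fid in friend_ids_by_user.get(our_user_id, [])
            if fid in profiles_by_twitter_id]
```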
Problem: My application still has to store this and figure out how to parse it; it's not very organized.
Solution: Extract this functionality into a new application that creates a better API for searching. If I accomplish this, I'll post an update here with a link.