I am trying to detect changes in the Google+ network (profile changes) in an efficient manner. My first idea was to use the eTags from People.List and People.Get. My assumption was that the eTag for a person in the List would be the same as the one in the Get. This is not the case!
I'd rather not fetch the details of every person in the network and check the eTag for each of them; I would run out of daily API calls very quickly with that approach.
Are there any other ways of determining the changes in the network?
Thanks!
I'm not aware of a way to notify your service when changes occur on a user's profile. I don't think that etags will work for what you are trying to do, and the client libraries should already be using etags to manage query caching. You can perform a few tricks to make queries lighter on your backend, though:
Batch API calls
Use a fields filter to get only the data that matters to your application (see the sketch below)
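For instance, here is a minimal sketch of a fields-filtered people.list call using google-api-python-client; it assumes `creds` already holds OAuth 2.0 credentials from your own auth flow (plus.login scope), and the field names follow the Google+ v1 docs:

```python
# Minimal sketch: fields-filtered people.list with google-api-python-client.
# Assumptions: `creds` comes from your own OAuth flow (plus.login scope);
# field names follow the Google+ v1 docs.
from googleapiclient.discovery import build

creds = ...  # OAuth2 credentials obtained elsewhere (assumption)

service = build("plus", "v1", credentials=creds)
request = service.people().list(
    userId="me",
    collection="visible",
    maxResults=100,
    fields="nextPageToken,items(id,displayName,etag)",  # only what we need
)
while request is not None:
    response = request.execute()
    for person in response.get("items", []):
        print(person["id"], person.get("displayName"), person.get("etag"))
    request = service.people().list_next(request, response)
```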
If you are running out of quota, you can also request to have your limits raised from the Google APIs console by clicking the Quotas link on the left. The developer relations team from Google+ checks these requests regularly and will raise your quota limits if your usage justifies it.
Let's say I have a client who has spent a lot of time and money creating a custom database, so there is a need for extra data security. They are concerned that the information in the database could be scraped if they allow access to it from a normal web app. A secure login won't be enough; someone could log in and then scrape the data. Just like any other web app, a PWA won't protect against this.
My overall opinion is that sensitive data would be better protected in a hybrid app that has to be installed. I am leaning toward React Native or Ionic for this project.
Am I wrong? Is there a way to protect the data from being scraped in a PWA?
There is no way to protect data that is visible to a browser client, regardless of the technology - plain HTML or PWA/hybrid app.
Though you can make it more difficult.
Enforce limits on how much information a client can fetch per minute/hour/day. Anyone who exceeds the limits can be blocked/sued/whatever.
You can return some data as images rather than text. This would make the extraction process a bit more difficult, but it would complicate your app and use more bandwidth.
If we are talking about a native/hybrid app, it can add a few more layers to make it more secure:
Use an HTTPS connection and enforce a check for a valid certificate.
Even better, check for a specific certificate (certificate pinning) so it can't be replaced by a man-in-the-middle.
I would guess an iOS app is more secure than an Android one, since Android apps are easier to decompile and run as modified versions with the restrictions removed.
Again, rate limiting seems to be the most cost-effective solution.
On top of rate limiting, you can add some sort of pattern detection: for example, if a client requests data at regular intervals close to the limits, it is logical to assume the requests come from a robot and the data is being scraped. A server-side sketch follows below.
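As a rough illustration, here is a minimal server-side rate-limiting sketch using Flask; the window size, limit, and in-memory store are all placeholder choices you would tune (and back with Redis or similar) in a real deployment:

```python
# Minimal rate-limiting sketch for a Flask backend. The window, limit, and
# in-memory store are illustrative; use Redis or similar in production.
import time
from collections import defaultdict, deque
from flask import Flask, abort, request

app = Flask(__name__)

WINDOW_SECONDS = 60
MAX_REQUESTS = 30  # per client per window; tune for your app

_hits = defaultdict(deque)  # client id -> timestamps of recent requests

@app.before_request
def rate_limit():
    client = request.remote_addr  # or better, an authenticated user id
    now = time.time()
    hits = _hits[client]
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()  # drop requests that fell out of the window
    hits.append(now)
    if len(hits) > MAX_REQUESTS:
        abort(429)  # Too Many Requests; also a good place to flag patterns
```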
HTTPS encrypts the data being retrieved from your API, so it cannot be 'sniffed' by a man in the middle.
The data stored in the Cache and IndexedDB is kept in a browser-internal format, which makes it tough (though not impossible) to access from outside the browser.
What you should do is protect access to the data behind authentication.
The only way someone could get to the stored data is by opening the developer tools and viewing the data in IndexedDB. Right now you can only see that a response has been cached in the Cache database.
Like Alexander says, a hybrid or native application will not protect the data any better than a web app.
I want to pull all the users in my company's Dropbox and then check whether their accounts have MFA enabled. I read over the documentation for the Dropbox API but did not see anything suggesting this is possible.
It's very sad to realize that a popular platform such as Dropbox doesn't expose a lot of basic features through its API (and the SDK itself is far from OK, compared to G Suite). Anyway, there are two hacky methods you can use to pull out that information (with some limitations).
First method:
By analyzing the team events with team_log_get_events() you can filter for tfa_change_status_details events. When new_value=TfaConfiguration('[sms|other]', None) is specified, 2FA is enabled.
The information that can be retrieved using this method is:
has_2fa - whether 2FA was ever configured.
is_tfa_enabled - whether 2FA is currently enabled.
tfa_type - whether 2FA is by SMS or by app.
However, keep in mind that you have to track changes continuously, and also that Dropbox keeps team events for only two years. A sketch of this method follows below.
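Here is a rough sketch of the first method with the official Dropbox Python SDK; it assumes a team access token with audit-log access, and the event/field names follow the team_log documentation, so double-check them against the current SDK:

```python
# Rough sketch of method one with the official Dropbox Python SDK.
# Assumptions: the token has team audit-log access; event/field names
# follow the team_log docs and may need adjusting.
import dropbox
from dropbox.team_log import EventCategory

dbx_team = dropbox.DropboxTeam("TEAM_ACCESS_TOKEN")

result = dbx_team.team_log_get_events(category=EventCategory.tfa)
while True:
    for event in result.events:
        if event.details.is_tfa_change_status_details():
            tfa = event.details.get_tfa_change_status_details()
            # new_value is a TfaConfiguration: sms/app/other, or disabled
            print(event.actor, tfa.new_value)
    if not result.has_more:
        break
    result = dbx_team.team_log_get_events_continue(result.cursor)
```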
Second method:
This information can also be retrieved through the front-end dashboard API (I can't remember the endpoint name; I think it is /2/get_multifactor, and inside you'd find some information about its status and the organizational policy regarding 2FA). However, to use the front-end dashboard API (which is totally undocumented) you'd need to simulate a successful login (and correctly use the lid and jar cookies), and you'd also need to bypass the captcha that randomly appears when you hit the service with too many requests.
To be honest, Dropbox's API is weak, neglected, and ugly; I wish I never had to use it. Anyway, I would recommend using the first method and praying for a significant update to the API.
No, unfortunately the Dropbox API doesn't expose this. We'll consider it a feature request.
There's a feature request open for this one (https://www.dropboxforum.com/t5/Dropbox-API-Support-Feedback/MFA-status-for-users/m-p/468564#M23886). But I wouldn't hold your breath; as Aviv mentioned, the Dropbox API seems surprisingly neglected at the moment.
Foursquare encourages developers to use as much caching as possible before making repetitive calls to the Foursquare API, so as not to exceed the hourly usage limit (5000 requests/hour).
So does that mean it is a bad idea to access the Venues API directly from a mobile app?
Should we make our mobile app retrieve results from our own server instead of calling Foursquare directly?
Thanks
Accessing Foursquare using your own servers as a proxy is generally encouraged. As you stated, it's better for caching, but it also has the advantages of:
being able to make changes quickly without waiting for an app submission (changes can therefore propagate to all your users, even ones that don't actively upgrade)
being able to collect and aggregate information about call volume, errors, etc. more easily (a minimal proxy sketch follows below)
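As a rough sketch, a proxy endpoint like the following (Flask, against the v2 venues/search endpoint; the cache TTL and in-memory store are placeholder choices) lets your mobile app hit your own server while you cache Foursquare responses:

```python
# Rough proxy-and-cache sketch (Flask). The TTL and dict cache are
# illustrative; swap in Redis or memcached in production.
import time
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

CACHE_TTL = 300  # seconds; tune to Foursquare's caching guidelines
_cache = {}      # (endpoint, params) -> (timestamp, payload)

@app.route("/venues/search")
def venues_search():
    ll = request.args.get("ll")  # "lat,lng" forwarded by the mobile app
    key = ("search", ll)
    hit = _cache.get(key)
    if hit and time.time() - hit[0] < CACHE_TTL:
        return jsonify(hit[1])  # serve from cache, no Foursquare call
    resp = requests.get(
        "https://api.foursquare.com/v2/venues/search",
        params={"ll": ll, "client_id": "CLIENT_ID",
                "client_secret": "CLIENT_SECRET", "v": "20130815"},
    )
    payload = resp.json()
    _cache[key] = (time.time(), payload)
    return jsonify(payload)
```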
I have read the docs for the SkyDrive REST APIs but didn't find any API that would let me sync with SkyDrive without recursively polling the folders to check for updates.
Is there any API to get only the updates for a user's drive?
A commonplace reality of epistemology is that...
It is typically much easier to prove that something exists than to prove that it does not exist
Nevertheless, I can say with a high level of confidence that the official REST API for SkyDrive doesn't include a way of getting a list of updated documents for synchronization purposes.
Furthermore, I didn't see any evidence of a non-supported/non-official API that would serve this purpose, and from observing the way the Windows client for SkyDrive interacts with the server (within the limits of fair-use reverse engineering), it appears that synchronization is done by walking the directory tree rather than by getting a differential list.
I believe the closest you can get is: Get a list of the user's most recently used documents
To get a list of SkyDrive documents that the user has most recently used, use the wl.skydrive scope to make a GET request to /USER_ID/skydrive/recent_docs, where USER_ID is either me or the user ID of the consenting user. Here's an example.
GET http://apis.live.net/v5.0/me/skydrive/recent_docs?access_token=ACCESS_TOKEN
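For illustration, here is a minimal sketch of the same call with the Python requests library; the ACCESS_TOKEN placeholder must carry the wl.skydrive scope, and the "data"/"updated_time" fields shown are assumptions based on the Live Connect docs:

```python
# Minimal sketch of the recent_docs call with the requests library.
# ACCESS_TOKEN must carry the wl.skydrive scope; the "data"/"updated_time"
# fields are assumptions based on the Live Connect docs.
import requests

resp = requests.get(
    "https://apis.live.net/v5.0/me/skydrive/recent_docs",
    params={"access_token": "ACCESS_TOKEN"},
)
resp.raise_for_status()
for doc in resp.json().get("data", []):
    print(doc.get("name"), doc.get("updated_time"))
```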
My goal is to synchronize a web application with an internal database. The web application has a public API, but in order to fully synchronize the two sources I would need to make around 2000 separate API calls every time. My instinct tells me that this is excessive and possibly irresponsible, but I lack the experience to know for sure.
In this particular case the web application is Asana, but I've encountered similar situations before with other services. Is there any way to know whether you're abusing a service through excessive API calls? I know I'm not going to DoS a company like Asana, but I can't shake the feeling that there must be a better way than making ~150k requests per day.
The only other option I can think of is to update the web service only when I know there's been a change in the database, but I'll lose a lot of capability that way.
I apologize for the subjectivity of this question, but I'm really hoping that someone can explain if there's any kind of etiquette that's expected when using public APIs.
(I work at Asana)
This is an excellent question, or rather set of questions.
You are designing a system that will repeatedly make requests for every object. What will happen as the number of objects grows? Even if your initial request rate were reasonable, this would suffer problems with scalability. A more scalable solution is one that scales with the number of changes in the system. This will also grow over time, but much more slowly - the number of changes a single user can make per day is relatively constant, but the total number of objects they've created over time grows and grows. So my first piece of advice would be to avoid doing things this way, and instead find a way to detect changes and just act on those. It would be interesting to know why you feel you'll lose capability by taking this approach.
Now, I happen to know that the Asana API does not currently provide you with any friendly mechanism to just detect changes in the system. This is a commonly requested feature and we are looking into it, though I unfortunately cannot promise a delivery date. So you might be left with no choice but to poll our system for now.
As for being polite to the API, many service providers set limits on their API usage to prevent accidental or malicious use of the API from impacting the service to their other customers -- Asana is no exception. Sometimes these limits are published, other times not, and there is no standard limit: it all depends on the service. But it is very thoughtful of you to be curious about service limitations.
That said, 150k requests per day is, for the Asana API, kind of a lot. If all of our API users gave us that much traffic, we might be serving more requests per day than Google Web Search, and we're not quite that scalable yet. :) Technically, we can sometimes handle requests at that volume from a single user.
If you must poll, try to poll at intervals of around 15 minutes. But please do not poll your entire workspace at that frequency; it's likely to be too much traffic/data. We're working on trying to provide you with a better solution.
If you do happen to make too many requests of the Asana API, you will get back HTTP status code 429 instead of your desired response; you can read more about that here (https://asana.com/developers/documentation/getting-started/errors). A small retry sketch follows below.
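As an illustrative sketch (not official Asana guidance), a poller can treat 429 responses like this; the Retry-After header and the example endpoint are assumptions to verify against the error documentation linked above:

```python
# Illustrative polling helper that honors 429s. The Retry-After header and
# endpoint are assumptions; check Asana's error docs linked above.
import time
import requests

def get_with_backoff(url, token, max_retries=5):
    for attempt in range(max_retries):
        resp = requests.get(url, headers={"Authorization": "Bearer " + token})
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Honor Retry-After if present, otherwise back off exponentially.
        delay = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError("rate limited repeatedly; slow your polling interval")

# Example: poll well-spaced (e.g. every 15 minutes), not continuously.
# data = get_with_backoff("https://app.asana.com/api/1.0/projects", "TOKEN")
```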