Distributing API calls to users for web app - api

We are trying to call one of Google's APIs, analyze the data, and then display it for the user on our site. We realize we can make calls to the API from our server using PHP or Python, but because of Google's rate limits we are looking for an option that would have the API call come from the user instead of from our server. Is there a way for a web app to distribute the requests amongst the users instead of making all the calls from our server?
Thank you very much as this has been a tricky one to Google.

I highly doubt that this is possible. API requests are counted per API key, and you can't provide every user with their own API key.
If your scenario is likely to request the same data for multiple users, I highly suggest you save/cache the API response. This will minimize how often your server sends API requests to Google, since repeated requests are answered from your own saved copy. You can then set an expiration on the saved/cached response so it is renewed every X days (a minimal sketch follows the scenario below).
Scenario
user 1: requests data for item 123 from the API.
server: sends the request to the API server and saves the response.
user 2: requests data for item 123.
server: returns the saved API response.
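A minimal sketch of that caching pattern, assuming a Python backend; fetch_from_google() is a hypothetical stand-in for whatever call your server actually makes to Google's API:

    import time

    CACHE_TTL = 24 * 60 * 60  # renew the saved response after one day (X = 1 here)
    _cache = {}               # item_id -> (saved_at, response)

    def fetch_from_google(item_id):
        """Placeholder for the real API call made from your server."""
        raise NotImplementedError

    def get_item(item_id):
        now = time.time()
        saved = _cache.get(item_id)
        if saved and now - saved[0] < CACHE_TTL:
            return saved[1]                      # user 2 path: return the saved response
        response = fetch_from_google(item_id)    # user 1 path: one real request to Google
        _cache[item_id] = (now, response)
        return response

In a real deployment the dictionary would typically be replaced by a database table or something like Redis, so the cache survives restarts and is shared between processes.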

Related

How to execute a function in the backend every x minutes

This is my issue:
I have an API which updates, every 30 seconds, the result it displays on an endpoint.
I want to display that result on my website for all visitors, and so update it automatically every 30 seconds.
I don't want each visitor to send a request to my API, as it would overwhelm the API (and it is clearly not the right way to implement it, I guess).
Is there a way to send one request every 30 seconds from my website backend to my API in order to display it for all visitors on the website?
Or maybe there is a smarter/more efficient way to do it?
Another question: I want exactly the same "front" website content for all users. There would be requests from the frontend to the backend to fetch the information, but I don't want some users to get the information earlier just because their requests arrive a few seconds before other users'. I was thinking of sending requests on a fixed schedule (based on GMT+1, for example); I don't know if that makes sense or if there is another way?
N.B.: I'm using Wix services; maybe I would have to switch and build my own website.
Thanks a lot for your answers
Hugo
Is there a way to send one request every 30 seconds from my website backend to my API in order to display it for all visitors on the website?
You could use realtime communication between your website and your backend server, for example Socket.IO or SignalR, depending on the stack you are using. So instead of sending a call every 30 seconds, the backend server would dispatch an event to all connected clients and tell them to update (or send the data along when you dispatch the event).
I want exactly the same "front" website content for all users.
As for your second question, if you opt to use realtime communication to sync data between backend and frontend, then there is no need for requests sent on a timer: your data is actively pushed from the server side, so every client receives the update at the same moment.
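Purely as an illustration (the right library depends on your stack, and on Wix specifically you may not be able to run your own Socket.IO server at all), a minimal push-based sketch using python-socketio, with the API URL as a placeholder:

    import eventlet
    import requests
    import socketio

    sio = socketio.Server(cors_allowed_origins='*')
    app = socketio.WSGIApp(sio)

    def poll_api():
        # One request every 30 seconds from the backend only; the result is
        # pushed to every connected client at the same moment.
        while True:
            data = requests.get('https://example.com/your-api-endpoint').json()  # placeholder URL
            sio.emit('update', data)
            sio.sleep(30)

    sio.start_background_task(poll_api)
    eventlet.wsgi.server(eventlet.listen(('', 8000)), app)

Each visitor's page then just listens for the 'update' event instead of calling the API itself.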

SoundCloud API: how to get a clientId

On page https://developers.soundcloud.com/
Register a new app (Currently unavailable)
A question for the developers: how soon will app registration be fixed?
Is it possible to register my application manually?
I need my clientId!
Apparently they haven't given out new client IDs for months (other users report the same).
To answer your question though, no. As Soundcloud says when you attempt to register an app, "...we will no longer be processing API application requests at this time."
I'm trying to use the API too. It seems we either need to use client IDs belonging to other people, wait an unspecified period of time, or stick to the select uses of their API that don't require a client ID.
It depends on what kind of content you want to access.
If you need only public content, then you can obtain the client ID from the AJAX requests your browser makes to SoundCloud. Open the network debugger in your browser and explore the AJAX requests; you should see some queries made to api-v2.soundcloud.com, and the query string should contain the client ID:
https://api-v2.soundcloud.com/tracks/687584815/playlists_without_albums?offset=85&limit=5&client_id=xxxxxxxxxxxxxxxxxxxx
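Once you have copied a client_id out of one of those requests, it can be reused on public endpoints. A small Python sketch using the same (undocumented, so liable to change or be rate-limited) endpoint as the example URL above:

    import requests

    CLIENT_ID = 'xxxxxxxxxxxxxxxxxxxx'  # value copied from your browser's network tab

    url = 'https://api-v2.soundcloud.com/tracks/687584815/playlists_without_albums'
    resp = requests.get(url, params={'offset': 85, 'limit': 5, 'client_id': CLIENT_ID})
    resp.raise_for_status()
    print(resp.json())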
Hope that helps.

Authenticating users from separate devices to be able to pull data from an API

I'm having trouble deciding on / understanding which method of authentication would be best in the following situation:
I have 3 separate "clients". A website, mobile app and browser extension.
Users' information and data are stored in a database.
The 3 clients will access the data via an API.
What I am trying to get my head around is how users of the system would log in via one of the three clients and authenticate with the API so they can then proceed to get and post data to the API.
I do not require 3rd-party applications to access the API; users can only access it by logging into the clients I provide. For this reason, am I right in thinking that OAuth 1/2 would be overkill?
I have attached an image which details how I envision the system.
An additional question:
Where in the system does the authentication come in? Would I authenticate within the API? So the user uses a form on one of the three clients to send a password, and the API then returns a "what" if they provide valid credentials?
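No answer is recorded here, but purely as an illustration of what that "what" usually is: a common pattern for a first-party website, mobile app and extension is for the API to verify the credentials and return a signed token, which each client stores and attaches to subsequent requests. A minimal sketch assuming Flask and PyJWT, with check_password() and the secret as placeholders:

    import time
    import jwt                        # PyJWT
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    SECRET = 'server-side-secret'     # placeholder; keep this out of the clients

    def check_password(username, password):
        """Placeholder: verify the credentials against your user database."""
        raise NotImplementedError

    @app.route('/login', methods=['POST'])
    def login():
        body = request.get_json()
        if not check_password(body['username'], body['password']):
            return jsonify(error='invalid credentials'), 401
        # The "what": a signed token the client keeps and sends with later requests.
        token = jwt.encode({'sub': body['username'], 'exp': time.time() + 3600},
                           SECRET, algorithm='HS256')
        return jsonify(token=token)

    @app.route('/data')
    def data():
        token = request.headers.get('Authorization', '').replace('Bearer ', '')
        claims = jwt.decode(token, SECRET, algorithms=['HS256'])  # raises if invalid or expired
        return jsonify(user=claims['sub'], items=[])

Whether a simple token flow like this is enough, or OAuth is still worth the overhead, depends on how much you trust the three clients, but all three can share the same /login endpoint and token handling.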

Twilio How to collect incoming SMS messages using .net efficiently

I created an application in VB.NET that ties into a scheduling software. It keeps our employees up to date by sending them SMS updates, and employees can reply back to us. Sending messages works great. The application uses the REST API to connect to Twilio. I can also get a list of incoming messages, but I can't seem to get it in a way that works well for me.
Currently my application checks whether there are new messages every 5 minutes. The application gets the message list (with the filter DateSent >= today) and then loops through the messages and copies the new ones into our scheduling database.
Is it possible to do a more efficient data pull for new SMS messages using VB.NET only? Can I include a time filter in addition to the current DateSent >= today filter to limit the result set? Any suggestions? (I don't do web coding, unfortunately.) Thanks.
Twilio evangelist here.
The best way to do this is just to use Twilio's webhook to let Twilio proactively tell you each time it receives a message. What's a webhook, you ask? Great question.
A webhook is simply an HTTP request that Twilio makes as soon as it receives an inbound SMS message to your Twilio phone number. You normally tell Twilio to make this HTTP request to a URL that you've created and published on a public website, which you can set up easily by using something like ASP.NET. In this scenario you can think of Twilio like a web browser that is making a request to a web application that you have created.
You can tell Twilio what URL it should request by opening the Numbers tab in your Twilio dashboard, and then locating and clicking the phone number you want to configure:
Now you set the URL you want Twilio to request in the Message Request URL field and click Save:
Now when Twilio requests this URL, it's going to pass a bunch of parameters with its request that you can use in your application logic. You can also return TwiML back to Twilio in response to its HTTP request, telling it to do things like send an SMS right back to the person who just sent one to you.
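The answer above sets this up in ASP.NET; just to illustrate the shape of the webhook (From and Body are among the standard parameters Twilio posts, and save_to_schedule_db() is a placeholder for your database write), a minimal sketch in Python/Flask:

    from flask import Flask, request
    from twilio.twiml.messaging_response import MessagingResponse

    app = Flask(__name__)

    def save_to_schedule_db(sender, body):
        """Placeholder: copy the incoming message into your scheduling database."""
        pass

    @app.route('/sms', methods=['POST'])   # set this URL as the Message Request URL
    def incoming_sms():
        sender = request.form['From']      # who sent the SMS
        body = request.form['Body']        # the text of the SMS
        save_to_schedule_db(sender, body)
        reply = MessagingResponse()        # optional TwiML reply sent back to the sender
        reply.message('Got it, thanks!')
        return str(reply), 200, {'Content-Type': 'application/xml'}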
If you're looking for a bit more of a step-by-step, the Quickstarts on our website are pretty easy to follow and will walk you through both sending and receiving text messages. The samples are in C# but are pretty straightforward, so converting to VB.NET should be easy.
Hope that helps.
I am doing something similar with VB.NET and Twilio. My solution was to put up an Azure web site and an Azure SQL database (the two can talk to each other). I set up my Twilio number to call an .ashx ASP.NET page on my Azure web site. Inside that page I have code that reads the incoming text message and saves it to my Azure SQL database.
Works great, but my problem is that the Azure database is in "the cloud" while my app/database that sends the original SMS is on my local network. Not sure how to cross that divide... (I should add that my local app can read the Azure SQL database, but it seems ugly to have to call out to Azure to get the data. I would have preferred to just save it in my local DB to begin with.)
Probably not a very helpful post, but maybe give you some architectural ideas. If you want to see my .ashx page just let me know.

Is the Twitter Search API affected by the recent Twitter API changes?

I've been building an app which allows the user to search through recent (i.e. 6-9 days worth) public tweets on Twitter using the Twitter Search API.
Currently, the site is entirely public - that is, users do not need to sign in to Twitter (or even be Twitter users at all) to use my app.
However, the upcoming changes to the Twitter API have left me confused, particularly the fact it would appear that every request to Twitter's API will need to be authenticated.
My limited understanding of how Twitter's API works is that I need to authenticate my app using OAuth, which in turn means that, if I want to continue accessing the Twitter Search API, users will need to sign in to my site before they can use the functionality related to the Search API; hence, only Twitter users will be able to use that section of my app.
Am I understanding this correctly, or is the Twitter Search API exempt from the changes? If I authenticate my app, does this mean the rate at which users can search Twitter status updates through my app is increased (or any other advantages over having non-authenticated apps)? Note that I am currently implementing a caching feature to cache related searches.
Thanks!
The changes to the Twitter API would affect your application depending on how your application works. These are the changes that you should be aware of:
All requests used to be anonymous. Now, all requests must be authenticated via OAuth.
With the old rate limits, according to my tests, you were able to make about one request per second per IP address. Now you can make 180 requests per 15-minute window per authenticated user (one request every 5 seconds on average).
Not related, but still worth mentioning, the data that the new API returns is more similar to the data that the Streaming API returns. It's much more complete.
So, according to these changes: if your application uses some kind of bot which polls the Search API, stores the results in a database, and your users then search within these stored results, you will have to implement OAuth with your own access token, which you can get by creating an application at dev.twitter.com.
But if your application connects to the Search API every time your users interact with it, and you think you will have to make more than one request every 5 seconds on average, then you will have to ask your users to authenticate so that you can use their access tokens for your requests.
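For the first case, a sketch of what an authenticated call to the version 1.1 Search API looks like, assuming the requests-oauthlib library, with all four credentials as placeholders for the values you get when you create your application at dev.twitter.com:

    import requests
    from requests_oauthlib import OAuth1   # pip install requests-oauthlib

    # Placeholders for your application's own credentials and access token.
    auth = OAuth1('CONSUMER_KEY', 'CONSUMER_SECRET',
                  'ACCESS_TOKEN', 'ACCESS_TOKEN_SECRET')

    # Every request to the v1.1 Search API must carry an OAuth signature like this.
    resp = requests.get('https://api.twitter.com/1.1/search/tweets.json',
                        params={'q': 'some query', 'count': 100},
                        auth=auth)
    resp.raise_for_status()
    for tweet in resp.json()['statuses']:
        print(tweet['id_str'], tweet['text'])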