Google Spreadsheets API blocks a table after a few requests

My application sends requests to the Google Spreadsheets API to write values to a table.
I can successfully send only 3-4 requests; after that, the table is blocked for 6-12 hours and the server returns a 503 (Service Unavailable) error. I thought this was caused by the basic filter or by hyperlinks in the data, but it fails without them too.
I also have another service that sends more requests and updates more than 10 tables without errors.
My question is: can Google block requests to a specific table because of some specific data or some specific kind of request?

It turned out the Sheets API service had been disabled on the Google Cloud Console dashboard. A very silly problem :(

Related

How to execute a function in the backend every x minutes

This is my issue:
I have an API whose result, exposed on an endpoint, updates every 30 seconds.
I want to display that result on my website for all visitors and refresh it automatically every 30 seconds.
I don't want each visitor to send a request to my API, as that would overwhelm it (and it is clearly not the right way to implement this, I guess).
Is there a way to send one request every 30 seconds from my website backend to my API and display the result for all visitors?
Or maybe there is another smarter/more efficient way to do it?
One more question: I want exactly the same front-end content for all users. There would be requests from the frontend to the backend to fetch the information, but I don't want some users to get it a few seconds earlier just because their requests arrived before other users'. I was thinking of sending the requests at fixed times (based on GMT+1, for example); I don't know if that makes sense or if there is another way.
N.B.: I'm using Wix services; maybe I would have to switch and build my own website.
Thanks a lot for your answers,
Hugo
Is there a way to send one request every 30 seconds from my website backend to my API and display the result for all visitors?
You could use realtime communication between your website and your backend server, for example Socket.IO or SignalR, depending on the stack you are using. Instead of every client sending a call every 30 seconds, the server dispatches an event to all clients telling them to update (or sends the data along with the event).
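For illustration, here is a minimal sketch of that pattern in Node/TypeScript with Socket.IO; the upstream URL, port, and event name are placeholders, and SignalR would follow the same shape on a .NET stack:

```typescript
import { createServer } from "http";
import { Server } from "socket.io";

const httpServer = createServer();
const io = new Server(httpServer, { cors: { origin: "*" } });

// Poll the upstream API once for everyone, then broadcast the result
// to every connected client. 30s matches the API's refresh interval.
async function pollAndBroadcast(): Promise<void> {
  const res = await fetch("https://api.example.com/result"); // placeholder URL
  const data = await res.json();
  io.emit("result-update", data); // all clients receive the same payload
}

setInterval(pollAndBroadcast, 30_000);
httpServer.listen(3000);
```

On the client, a single `socket.on("result-update", render)` handler replaces per-visitor polling, which also addresses the fairness concern: every visitor receives the push at practically the same moment.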
I want exactly the same front-end content for all users.
As for your second question: if you opt for realtime communication to sync data between the backend and frontend, there is no need for requests on a timer, since the data is always pushed from the server side.

How to throttle the number of requests to an API?

There is a CRM system that provides webhooks to notify about events happening in it (for instance, a user creates an entity or moves it to the next 'status'). On certain events, a webhook triggers my script, which creates a new entity in the CRM using its API. The API has a rate limit of 7 requests per second; violating this rule may get the account's API access restricted for some time.
The problem is that if a user changes the 'status' of 1000 entities, the webhook triggers my script 1000 times, so it calls the API 1000 times, which may violate the rate limit. Is there any way to temporarily 'store' all the requests that come from the webhook and then run the script no more than n times per second?
The script is written in PHP and currently lives on my Apache server, but later it may be moved to a client's server or somewhere else.
I've read about RabbitMQ and Kafka, but they seem like overkill for this task. Or maybe they're fine? I just don't have enough experience with these systems.
Any help would be appreciated.
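For what it's worth, the "temporarily store, then drain" idea the question describes does not require RabbitMQ or Kafka. Here is a minimal in-memory sketch (TypeScript for brevity; the same loop is easy to write in PHP with a database table as the queue). Note that an in-memory queue is lost on restart, so a durable store is safer in production:

```typescript
// Webhook handlers enqueue work instead of calling the CRM directly;
// a drain loop runs at most MAX_PER_SECOND jobs each second.
type Job = () => Promise<void>;

const queue: Job[] = [];
const MAX_PER_SECOND = 7; // the CRM's documented rate limit

export function enqueue(job: Job): void {
  queue.push(job);
}

setInterval(() => {
  // Take up to 7 jobs off the front of the queue and fire them.
  for (const job of queue.splice(0, MAX_PER_SECOND)) {
    job().catch((err) => console.error("CRM call failed:", err));
  }
}, 1000);
```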

Distributing API calls to users for web app

We are trying to call one of Google's APIs, analyze the data, and then display it for the user on our site. We realize we can make the calls to the API from our server using PHP or Python, but because of Google's rate limit, we are looking for an option where the API call comes from the user instead of our server. Is there a way for a web app to distribute the requests among its users instead of making all the calls from our server?
Thank you very much, as this has been a tricky one to Google.
I highly doubt that this is possible. The API limit is per API key, and you can't provide every user with their own API key.
If your scenario may request the same data for multiple users, I highly suggest you save/cache the API responses. This minimizes how often your server sends API requests to Google, since you keep a copy of each response on your server. You can then set an expiration on the saved/cached response so it is renewed every X days. A sketch follows the scenario below.
Scenario:
user 1: requests data for item 123 from the API.
server: sends the request to the API server and saves the response.
user 2: requests data for item 123.
server: returns the saved API response.
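A minimal sketch of that save/cache idea in TypeScript; the endpoint URL and the TTL are placeholders:

```typescript
// Cache API responses per item id so repeated requests reuse one
// upstream call until the entry expires.
interface Entry {
  data: unknown;
  expires: number; // epoch ms
}

const cache = new Map<string, Entry>();
const TTL_MS = 24 * 60 * 60 * 1000; // renew once a day (tune to taste)

export async function getItem(id: string): Promise<unknown> {
  const hit = cache.get(id);
  if (hit && hit.expires > Date.now()) {
    return hit.data; // user 2 gets the saved response, no API call
  }
  const res = await fetch(`https://api.example.com/items/${id}`); // placeholder endpoint
  const data = await res.json();
  cache.set(id, { data, expires: Date.now() + TTL_MS });
  return data;
}
```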

Twilio: How to collect incoming SMS messages efficiently using .NET

I created an application in VB.NET that ties into scheduling software. It keeps our employees up to date by sending them SMS updates, and employees can reply back to us. Sending messages works great. The application uses the REST API to connect to Twilio. I can also get a list of incoming messages, but I can't seem to do it in a way that works well for me.
Currently my application checks for new messages every 5 minutes. It fetches the message list (filtered with DateSent >= today), then loops through the messages and copies the new ones into our scheduling database.
Is it possible to do a more efficient data pull for new SMS messages using VB.NET only? Can I include a time filter, in addition to the current DateSent >= today filter, to limit the result set? Any suggestions? (I don't do web coding, unfortunately.) Thanks.
Twilio evangelist here.
The best way to do this is to use Twilio's webhooks and let Twilio proactively tell you each time it receives a message. What's a webhook, you ask? Great question.
A webhook is simply an HTTP request that Twilio makes as soon as it receives an inbound SMS message to your Twilio phone number. You tell Twilio to make this HTTP request to a URL you've created and published on a public website, which you can set up easily using something like ASP.NET. In this scenario, you can think of Twilio as a web browser making a request to a web application you have created.
You can tell Twilio which URL it should request by opening the Numbers tab in your Twilio dashboard, then locating and clicking the phone number you want to configure. Set the URL you want Twilio to request in the Message Request URL field and click Save.
Now when Twilio requests this URL, it passes a bunch of parameters along with the request that you can use in your application logic. You can also return TwiML in response to Twilio's HTTP request, telling it to do things like send an SMS right back to the person who just texted you.
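To make the flow concrete, here is roughly what such a webhook endpoint looks like, sketched in Node/TypeScript with Express and the twilio package (the official samples are in C#/ASP.NET; the endpoint path and reply text are placeholders, while From and Body are the standard parameters Twilio posts with each inbound message):

```typescript
import express from "express";
import twilio from "twilio";

const app = express();
app.use(express.urlencoded({ extended: false })); // Twilio posts form-encoded data

app.post("/sms", (req, res) => {
  const from: string = req.body.From; // sender's phone number
  const body: string = req.body.Body; // text of the inbound SMS
  // ... insert (from, body) into the scheduling database here ...

  // Returning TwiML is optional; this replies to the sender immediately.
  const twiml = new twilio.twiml.MessagingResponse();
  twiml.message("Thanks, your reply was received.");
  res.type("text/xml").send(twiml.toString());
});

app.listen(3000);
```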
If you're looking for a bit more of a step-by-step, the Quickstarts on our website are pretty easy to follow and will walk you through both sending and receiving text messages. The samples are in C# but are pretty straightforward, so converting them to VB.NET should be easy.
Hope that helps.
I am doing something similar with VB.NET and Twilio. My solution was to set up an Azure website and an Azure SQL database (the two can talk to each other). I configured Twilio to call an .ashx ASP.NET page on my Azure website; inside that page, code reads the incoming text message and saves it to the Azure SQL database.
It works great, but my problem is that the Azure database is in "the cloud," while the app/database that sends the original SMS is on my local network. I'm not sure how to cross that divide. (I should add that my local app can read the Azure SQL database, but it seems ugly to have to call out to Azure to get the data; I would have preferred to save it in my local DB to begin with.)
Probably not a very helpful post, but maybe it gives you some architectural ideas. If you want to see my .ashx page, just let me know.

Fetching data via Facebook Connect taking over 10 seconds

Our site uses Facebook Connect. When a new user signs up, we ask for permission to pull their interest data, their list of friends, and their friends' interests. Fetching this data used to be a very quick process (a couple of seconds). Over the last week or so, the time to fetch it has increased to 10+ seconds. According to Facebook Insights, our site is not being throttled, and we didn't make any changes to our site.
Is anyone else experiencing this issue with Facebook? Any ideas for how to address it?
Thanks!
As of 1/26 at 7:55 PM EST, the live status page doesn't indicate any irregular activity.
Sometimes this occurs simply because a user has a lot of likes and interests. I would recommend making this operation asynchronous, following a flow something like this (see the sketch after the list):
User connects with your app.
Get the access token and store it in a queue that a background process can access.
Get the information you need immediately to make the app work.
Some time later:
In a background process, grab an access token from the queue, use it to pull the remaining data, and handle it however you'd like.
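A rough sketch of that queue-and-worker flow in TypeScript; the queue, interval, and persistence step are all placeholders, though /me/likes is a real Graph API endpoint (version prefix omitted):

```typescript
// At connect time we only store the token; the expensive Graph API
// calls happen later, off the signup path.
const tokenQueue: string[] = [];

export function onUserConnected(accessToken: string): void {
  tokenQueue.push(accessToken);
}

// Background worker: drain one token at a time and pull the heavy data.
setInterval(async () => {
  const token = tokenQueue.shift();
  if (!token) return;
  const res = await fetch(
    `https://graph.facebook.com/me/likes?access_token=${encodeURIComponent(token)}`
  );
  const likes = await res.json();
  // ... persist likes, then fetch friends / friends' interests the same way ...
}, 5000);
```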
A simpler, although less robust, option is redirecting the user to a page upon installation that makes an AJAX request telling your server to download the information from the Graph API. This keeps the response time low, but it requires the user to have JavaScript enabled and to stay on the destination page long enough for the request to be made.