Creating multiple tasks in a batch using Asana API? - batch-processing

I have written a piece of Google Apps Script code that searches for all Action Items (AIs) in a doc and creates Asana tasks from them. This is awesome, except for one annoying problem: it takes a pretty long time (5-10 seconds) to create all the tasks, since I am making a separate request for each one.
I am trying to see if there is a way to add multiple tasks in a single batch request. I've looked in the API docs, and they have nothing on this topic, but maybe there is some undocumented way of using the API to do this?
Or if not, please consider this as a feature request, Asana team!

We have some ideas for batch-adding tasks, but you're right, it's not currently available. And no, no secret undocumented endpoint either ... yet.
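In the meantime, one way to cut that 5-10 second wait without a batch endpoint is to send the individual task-creation requests in parallel. Here is a minimal Google Apps Script sketch using UrlFetchApp.fetchAll(); the token, workspace, and project IDs are placeholders, and this assumes the documented POST /tasks endpoint:

```javascript
// Build one request object per action item, then send them all in parallel.
// UrlFetchApp.fetchAll() issues the HTTP calls concurrently instead of one by one.
function createAsanaTasks(actionItems) {
  var token = 'ASANA_PERSONAL_ACCESS_TOKEN'; // placeholder
  var requests = actionItems.map(function (item) {
    return {
      url: 'https://app.asana.com/api/1.0/tasks',
      method: 'post',
      contentType: 'application/json',
      headers: { Authorization: 'Bearer ' + token },
      payload: JSON.stringify({
        data: {
          name: item,                // task title from the doc's action item
          workspace: 'WORKSPACE_ID', // placeholder
          projects: ['PROJECT_ID']   // placeholder
        }
      })
    };
  });
  // One round of concurrent requests instead of N sequential ones.
  var responses = UrlFetchApp.fetchAll(requests);
  responses.forEach(function (r) {
    if (r.getResponseCode() >= 300) {
      Logger.log('Task creation failed: ' + r.getContentText());
    }
  });
}
```

This doesn't reduce the number of requests, but the wall-clock time becomes roughly that of the slowest single request rather than the sum of all of them.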

Related

How to add pagination in an API

I want to know how to add pagination to API calls for more efficient data retrieval, specifically with the YouTube API.
I haven't tried anything so far, as this is a new concept to me.
What I personally do is usually one of two things:
1. (My preferred way) I create more than one API token. Every X requests I dynamically switch which token executes the request, which avoids throttling.
2. When sending a large number of requests, I dynamically pause after every X requests.
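On the original pagination question itself: the YouTube Data API pages results with a pageToken cursor rather than offsets. A minimal sketch in plain JavaScript (the API key and playlist ID are placeholders) that follows nextPageToken until the results run out:

```javascript
// Fetch every item in a playlist by following the nextPageToken cursor.
async function fetchAllPlaylistItems(apiKey, playlistId) {
  const items = [];
  let pageToken = '';
  do {
    const url = 'https://www.googleapis.com/youtube/v3/playlistItems' +
      '?part=snippet&maxResults=50' +              // 50 is the API's max page size
      '&playlistId=' + playlistId +
      '&key=' + apiKey +
      (pageToken ? '&pageToken=' + pageToken : '');
    const res = await fetch(url);
    const data = await res.json();
    items.push(...(data.items || []));
    pageToken = data.nextPageToken;                // undefined on the last page
  } while (pageToken);
  return items;
}
```

Each page costs quota, so fetching only as many pages as you actually need is also the cheapest way to stay under the limits.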

Is there an API for purging a project in OpenStack?

I need to purge my users' OpenStack projects easily, through an API call, just like this CLI command:
neutron purge PROJECT_ID
which is available in the Neutron project docs, but as an API call.
I couldn't find such an API, so my questions are:
1. Is there such an API?
2. If not, why? Is there a specific reason for it?
I checked the source code of the clients and neutron-server, but unfortunately there is no dedicated endpoint in the REST API for this functionality.
This feature is only supported by the neutron-client, not by the openstack-client. When you run neutron purge PROJECT_ID, all the client does inside its Python code is list all resources related to the given project, iterate over that list, and send a delete request to the Neutron REST API for each individual resource. So it is only a simple automation in the client's Python code, not a dedicated endpoint on the server side.
See the specific function inside the code here: https://github.com/openstack/python-neutronclient/blob/master/neutronclient/neutron/v2_0/purge.py#L63
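A rough sketch of that same list-then-delete loop, expressed directly against the Neutron REST API in plain JavaScript. It covers ports only for brevity (a real purge iterates over routers, networks, security groups, and so on), and the endpoint URL and token are placeholders:

```javascript
// Mimic `neutron purge`: list a project's resources, then delete each one.
// Shown for ports only; other resource types follow the same pattern.
async function purgePorts(neutronUrl, authToken, projectId) {
  const headers = { 'X-Auth-Token': authToken };
  // GET /v2.0/ports?project_id=... lists the ports owned by the project.
  const res = await fetch(
    neutronUrl + '/v2.0/ports?project_id=' + projectId, { headers });
  const { ports } = await res.json();
  for (const port of ports) {
    // One DELETE per resource -- exactly what the client does internally.
    await fetch(neutronUrl + '/v2.0/ports/' + port.id, {
      method: 'DELETE',
      headers
    });
  }
}
```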
Based on my experience with OpenStack and its community, I think it was done like this because it was easier to add new code only to the neutron-client. Had this become a new endpoint, the feature would have had to be implemented in neutron, the openstack-client, and the openstacksdk as well, and each repository has its own team. This purge feature is so small that it was not worth persuading all four teams. The more components you have to touch for one simple feature, the harder the contribution becomes: whoever wants to bring the feature upstream is responsible for bringing the teams of all required components together, and if even one member of a core team has a problem with your implementation, you have to start nearly from the beginning. It can easily take a year or two to land a cross-component feature like a new endpoint upstream if you are not on a core team yourself. Bringing the feature only into the neutron-client is quite easy compared to a cross-project contribution.
This is at least the reason why I too would implement this feature only in the neutron-client (or only in the openstack-client, if possible) instead of adding a new endpoint, if I were bringing this feature upstream.

How do I create/delete items in my DB via API in Cypress?

Can you please give some examples of how to skip the UI and populate the DB with API calls in Cypress?
I am rather new to Cypress and can't find a solution myself.
Thank you in advance!
Cypress is only going to help you with
... anything that runs in a browser
You need to design this API layer for your test harness. Put simply, the Back Door manipulation and Fixture Setup patterns are what you're looking for. Combining both will improve your UI automation suites. Adding and reusing such an API layer will make your suites fast enough to be part of the product's daily life, rather than relying on heavy nightly regressions.
More details in my post on the topic.
Based on the limited amount of information you gave us, I am taking a calculated guess here. First, you can easily call an API with cy.request(). With that, you could have your API do whatever it wants to the DB. I am not sure what you mean by "skip UI": you want to test the UI, so you can't skip it entirely, but you could mock the API responses to fill in the information you want.
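To make the cy.request() suggestion concrete, here is a minimal sketch that seeds and cleans up a DB record through a hypothetical /api/items endpoint; the URL, payload shape, and response fields are assumptions about your backend, not part of Cypress itself:

```javascript
// Seed a record through the API before each test, clean it up afterwards.
describe('item page', () => {
  let itemId;

  beforeEach(() => {
    // Hypothetical endpoint: adjust the URL and body to your backend's API.
    cy.request('POST', '/api/items', { name: 'test item' })
      .then((resp) => {
        expect(resp.status).to.eq(201);
        itemId = resp.body.id;
      });
  });

  afterEach(() => {
    // Delete the record so tests stay independent of each other.
    cy.request('DELETE', `/api/items/${itemId}`);
  });

  it('shows the seeded item', () => {
    cy.visit(`/items/${itemId}`);
    cy.contains('test item');
  });
});
```

The point of the pattern is that setup and teardown never touch the UI; only the behavior under test does.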

Creating Multiple Google/YouTube Data API Keys

Is it possible to create multiple API keys for the YouTube Data API?
The majority of Live YouTube Subscriber Counters use loads of different API keys for their counters (as can be seen in their JavaScript code).
The aim of doing so is to avoid exceeding the daily quota limit of 1,000,000 units; having to send requests every few seconds per page visited would mean the limit is reached very quickly.
How are they able to get away with this?
Here is an SO post that answers your question:
Technically you can run your application using different API keys; it should work fine. There is nothing wrong with creating additional projects on the Google Developer console. You don't need to go as far as creating another Google account.
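As an illustration of the rotation these counters appear to do, a trivial round-robin over several keys might look like the sketch below. The keys are placeholders, and whether stretching quota this way complies with the API's terms of service is a separate question:

```javascript
// Round-robin over several API keys so each request uses the next key.
const API_KEYS = ['KEY_1', 'KEY_2', 'KEY_3']; // placeholders
let keyIndex = 0;

function nextKey() {
  const key = API_KEYS[keyIndex];
  keyIndex = (keyIndex + 1) % API_KEYS.length;
  return key;
}

async function getSubscriberCount(channelId) {
  const url = 'https://www.googleapis.com/youtube/v3/channels' +
    '?part=statistics&id=' + channelId + '&key=' + nextKey();
  const data = await (await fetch(url)).json();
  return data.items[0].statistics.subscriberCount;
}
```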

Limits of the Wikipedia API

I read that Wikipedia's API is called MediaWiki. My question is regarding this API: does it have a maximum number of calls per day/hour/minute? I can't seem to find it documented.
See the Wikimedia REST API "Terms and Conditions" for the latest rate limits (200 requests per second as of 2022). What do you plan to do with the Wikipedia API?
They publish an API:Etiquette page and an API:FAQ.
There is no hard and fast limit on read requests, but we ask that you be considerate and try not to take a site down. Most sysadmins reserve the right to unceremoniously block you if you do endanger the stability of their site.
If you make your requests in series rather than in parallel (i.e. wait for one request to finish before sending a new request, such that you're never making more than one request at the same time), then you should definitely be fine. Also try to combine things into one request where you can (e.g. use multiple titles in a titles parameter instead of making a new request for each title).
API FAQ states you can retrieve 50 pages per API request.
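Putting those two recommendations together (requests in series, up to 50 titles per call), a sketch against the English Wikipedia endpoint might look like this; the prop=info query is just an example of what you might fetch per title:

```javascript
// Query pages in batches of 50 titles, one request at a time (in series).
async function fetchPages(titles) {
  const results = [];
  for (let i = 0; i < titles.length; i += 50) {
    const batch = titles.slice(i, i + 50).join('|'); // titles are pipe-separated
    const url = 'https://en.wikipedia.org/w/api.php' +
      '?action=query&format=json&prop=info&origin=*' +
      '&titles=' + encodeURIComponent(batch);
    // Awaiting each request keeps the calls strictly serial,
    // as the etiquette guidelines ask.
    const res = await fetch(url);
    const data = await res.json();
    results.push(data.query.pages);
  }
  return results;
}
```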
You can use Data Dumps as well if you need content offline (likely a little outdated).
For a graceful termination of your script in case you hit any of the limits, you can handle errors & warnings in API calls using these status messages.
If there is no need for "live" data, it would be better to use a data dump.