API call to all iTunes stores

I'm looking to forward information to my Google spreadsheet. Using the documentation for iTunes Search to look up a specific movie, I tried https://itunes.apple.com/search?term=AvengersEndgame&country=US&entity=movie with an API connector add-on for Google Sheets, and this returned one result, for the US store only, for this movie.
Now adding the same parameters but changing the country code to CA would show the information for the Canada store. https://i.imgur.com/SPEdQj5.jpg
Now using them together as in
https://itunes.apple.com/search?term=AvengersEndgame&country=US&entity=movie
https://itunes.apple.com/search?term=AvengersEndgame&country=CA&entity=movie
https://i.imgur.com/X2GgUhL.jpg
it shows two results nicely, but not always: sometimes it returns an empty JSON response and I can't tell why.
When I try to go further and add more stores by appending the same parameters with different country codes, I receive more and more empty JSON responses.
That's when I realized I can only make about 20 calls a minute; I did manage to query about 19 stores, with good results: https://imgur.com/K52onOh
As there are about 155 stores in iTunes, I'm looking for a faster way, without the hassle of adding 155 of these parameter sets, knowing it will not work since I can only make about 20 calls a minute. Could I maybe set up a function that makes 20 calls every minute until all 155 are done, with the results then placed in different tabs of a Google spreadsheet?
When I try a different movie, Annabelle Comes Home, I end up getting these errors when querying multiple stores.
https://i.imgur.com/CSvMbVu.jpg
Using only one store, for example US, shows a correct result. So what causes the error?
Hope someone can help me with this: I have the parameters but am looking for a way to cover all stores and get the results into a spreadsheet, which I can then edit further into a nice list. Not all stores have to be covered; I've narrowed the list down to 55 stores.
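For what it's worth, the per-minute throttle can be handled with a simple loop that sleeps between requests. Below is a minimal Python sketch, assuming the requests library and an illustrative subset of store codes (extend the list to your 55). Passing the term through params also URL-encodes it, so if the Annabelle Comes Home errors come from the unencoded spaces in the term, that would be fixed too, though that is only a guess:

import time
import requests

# Illustrative subset of iTunes storefront country codes; extend as needed.
COUNTRIES = ["US", "CA", "GB", "AU", "DE"]

def search_all_stores(term, entity="movie", delay_seconds=3.5):
    """Query each store in turn, sleeping so we stay under ~20 calls/minute."""
    results = {}
    for country in COUNTRIES:
        resp = requests.get(
            "https://itunes.apple.com/search",
            params={"term": term, "country": country, "entity": entity},
        )
        resp.raise_for_status()
        results[country] = resp.json().get("results", [])
        time.sleep(delay_seconds)  # roughly 17 calls per minute, under the limit
    return results

all_results = search_all_stores("Annabelle Comes Home")

Each country's result list could then be written to its own spreadsheet tab.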

Related

Mailchimp Archive: get more than 20 results

I am using Mailchimp's archive URL in PHP: I simply fetch the URL and display it as it sits, in order to white-label the funky URL, i.e.
https://us17.campaign-archive.com/home/?u=xxxxxyyyyyxxxxyyyy&id=xxxxyyyyyxxxxyy
In doing so I have read through both the Archive and API documentation and have found nothing on a parameter for row count. It defaults to 20, as stated in the Archive docs, but I know I have seen archives with a larger row count than that. Is anyone familiar enough with the URL parameters used by Mailchimp to increase the row count to, say, 100? I.e.
https://us17.campaign-archive.com/home/?u=xxx&id=yyy&count=100
It's been a problem for years. Even in 2022 there is still no known way for an end user to get more than the 20 most recent issues from Mailchimp; they simply refuse to add or allow that ability.
However, the newsletter creator can go into their back end and generate/enable a JavaScript API that has a &show= parameter, which can be increased.
https://mailchimp.com/help/add-an-email-campaign-archive-to-your-website/
Again, only the campaign creator can do this, not some random end-user/reader.
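For reference, the embed code that admin page generates looks roughly like the following; the u and fid values are placeholders, and the exact markup may differ from what Mailchimp currently emits:

<script language="javascript" src="//us17.campaign-archive.com/generate-js/?u=xxxxx&fid=yyyyy&show=100" type="text/javascript"></script>

Bumping the show value in that URL is what raises the number of issues displayed.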

Is there a better way to trap Xero API errors?

I am writing some code in VB.NET 2013 Express to access Xero accounting via a private application, and it's working fairly well. However, I have come across a problem when trying to write code to upload multiple contacts from a single XML file. I parse the XML, create a new contact from each line, and add it to a list of contacts. I then submit these to Xero:
Try
    Dim sResult = private_app_api.Create(mContacts)
Catch ex As Xero.Api.Infrastructure.Exceptions.ValidationException
    ' Inspect ex.ValidationErrors to determine what went wrong
End Try
If all contacts create correctly, sResult contains a list of those contacts with their Xero GUIDs, which I then need to feed back to the system they came from. This all works correctly.
If one or more of the contacts fails to create for some reason, I get a list of one or more errors in the ex.ValidationErrors collection, but I get nothing in sResult. So I don't have a reference back to those that worked, only those that did not.
To get around this, I am looping through each contact and pre-checking that it doesn't already exist on Xero and doesn't have a duplicate name. This also works, and means that I only submit contacts that I know are not already on Xero.
My worry now, though, is that I am going to run into the Xero API limits of 60 calls in a rolling 60-second window. I am trying to make the code robust by pre-checking most of the common things that could cause a problem, but every time I do that, I get closer to the limit, which in theory means that I need to add some complexity by trying to throttle calls to Xero.
Is there a better way that I can call .create() and get both the successful information and the error information?
I think the way around this is to add a reference to RateLimiter when I first create the API object. This appears to implement a mechanism whereby any calls that would exceed the rate limit are automatically paused. It seems necessary, though, to set the limit a little lower than 60 per 60-second rolling window, as I still get rate errors at that level. I set it to 50, and my test code now waits a little while once it runs over the limit.
I haven't figured out how to implement both the 60/60s limit and the 5000/24h limit, though.
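The core idea behind honoring both windows at once is to keep timestamps of recent calls for each window and sleep until the oldest one expires from whichever window is full. This is a language-agnostic sketch of the algorithm in Python, not the Xero library's RateLimiter:

import time
from collections import deque

class DualWindowLimiter:
    """Block until a call fits under both a per-minute and a per-day cap."""
    def __init__(self, per_minute=50, per_day=5000):
        # (window length in seconds, max calls, timestamps of recent calls)
        self.windows = [(60.0, per_minute, deque()),
                        (86400.0, per_day, deque())]

    def wait(self):
        while True:
            now = time.monotonic()
            sleep_for = 0.0
            for span, cap, calls in self.windows:
                while calls and now - calls[0] >= span:
                    calls.popleft()  # this call has left the window
                if len(calls) >= cap:
                    sleep_for = max(sleep_for, span - (now - calls[0]))
            if sleep_for <= 0:
                break
            time.sleep(sleep_for)
        for _, _, calls in self.windows:
            calls.append(time.monotonic())

limiter = DualWindowLimiter()
# limiter.wait()  # call before each Xero request

The per-minute cap is set to 50 rather than 60 for the same safety margin described above.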

How do I search this? Possible to access more than 100 JSON API search results if I pay for it?

I want to be able to:
1. create a search engine
2. programmatically search it through an API (Python, or other)
3. paginate through the results (all of them, if I choose)
4. store the URLs or results that I want.
Is this even possible with Google Custom Search Engine?
I enabled billing, my credit card is up to date with Google, and I can do steps 1-3 above.
On a search I may get back 4,000 results, for example, but I can only access 10 at a time through the API, no more, and when I reach 100 results I am shut off.
I want to be able to process 1000 results if I wish.
Before you reply, do you personally have working code that goes beyond the 100 limit?
If so, I would be very much interested in speaking with you and learning how you did it.
I am using Python at the moment, but it could be any language.
--
I tried using &start=100, 200, and so on to paginate through the results, but this does not work.
I tried getting 100 results in a Python script, ending the program, then calling it again with start=100 (after the first set returned), and nothing happened.
I want to be able to use the Google Custom Search API and pay Google for a monthly subscription, but I have not found that this is possible.
For any given search, I want to decide how many results to process: it could be 1K, could be 20K. I simply need/want access to the full result set, but I have not found a way to do this.
The API allows only a max result depth of 100. See https://developers.google.com/custom-search/v1/cse/list
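For anyone hitting the same wall, this is what paging up to that depth looks like; a minimal Python sketch against the documented v1 endpoint, with a placeholder key and engine ID. The start values run 1, 11, ..., 91, because the API rejects anything deeper:

import requests

API_KEY = "your-api-key"      # placeholder
CX = "your-search-engine-id"  # placeholder

def cse_links(query, max_results=100):
    """Page through Custom Search results; the API stops at result 100."""
    links = []
    for start in range(1, min(max_results, 100) + 1, 10):
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={"key": API_KEY, "cx": CX, "q": query,
                    "start": start, "num": 10},
        )
        resp.raise_for_status()
        batch = resp.json().get("items", [])
        if not batch:
            break
        links.extend(item["link"] for item in batch)
    return links

Anything past 100 simply isn't served, regardless of billing.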

I have a list of 35,000 company names. I need to perform an internet search and return the URL of the first result.

I have a list of 35,000 company names. I need to perform an internet search and return the first result, and I would like to automate the process. I was originally thinking about using IE automation in Excel; however, I am not sure if there is a better approach. I need to Google the company's name and return the URL of the first result. If the results could be in Excel, that would be great, as the list is in Excel. Any thoughts?
You can automate IE to perform the search. Normally, a delay is set up to keep checking whether the page has loaded. I suggested recently that a better alternative is to keep trying to reference the first H3 element in the page and read its contained A tag's href attribute. If this succeeds, stop navigating the page, or navigate to the next page.
There is a slight catch with this approach, though, as the first H3 is often a sponsored link. It is possible to detect this, however, and find a link further down.
The whole process will still take an age, though.
Alternatively, there is the Google Search API, but you'll have to pay for more than 100 searches per day.
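If the API route is acceptable, grabbing just the first result URL per company is short. A sketch in Python using the Custom Search JSON API, assuming a one-column CSV exported from Excel and placeholder credentials (35,000 lookups will go well beyond the free 100-per-day tier):

import csv
import requests

API_KEY = "your-api-key"      # placeholder
CX = "your-search-engine-id"  # placeholder

def first_result_url(company):
    """Return the URL of the top search result, or None if nothing matched."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": company, "num": 1},
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    return items[0]["link"] if items else None

# Read names from a one-column CSV, write name,url pairs back out for Excel.
with open("companies.csv", newline="") as src, \
     open("results.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for (name,) in csv.reader(src):
        writer.writerow([name, first_result_url(name) or ""])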

Getting all listing images from an Etsy shop

THE SITUATION
I've been tooling around in the Etsy sandbox API trying to figure out a solution for a client who wants to show the default image and title for all their Etsy listings. Upon clicking one, they want it to direct the visitor off the website and onto that Etsy listing's page.
Now, figuring out how to get the name and URL of all their listings was easy and can be done in one public API call:
http://openapi.etsy.com/v2/shops/:shop_id/listings/active?method=GET&api_key=:api_key
This call not only returns the name and URL of each listing, but also a multitude of other information on that particular item. I suppose I should limit my call to just the fields I need, but for the sake of example, I digress...
What surprises me most is that this gigantic array of information does not include something I'd expect to find there: the images associated with the listing, or at least the main image. There is, however, a separate API call I can make to get the images for a single listing, but that would require getting the listing_id and making a separate API call for each item. This turns what I would expect to be one (or hell, even two) calls to the Etsy API into 1 plus however many items you return. Granted, if you're selling 100 items in a shop, that's 101 API calls in just a few seconds! Call me crazy, but I feel there's got to be a better way to do this than what I've found.
THE QUESTION
What is the easiest way to make an Etsy API call to return all the images (or even the main image) for all the listings in a shop?
I ended up using the following call to include everything I needed in one API request:
http://openapi.etsy.com/v2/shops/:shop_id/listings/active?method=GET&api_key=:api_key&fields=title,url&limit=100&includes=MainImage
This way I defined my fields so I don't get unnecessary information, but I also set a limit on the results and used includes=MainImage as a query string parameter. This was at the suggestion of a member of the Etsy developer community.
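Consuming that call looks something like this in Python; treat the MainImage size field (url_570xN) as an assumption on my part, since the v2 responses offer several image sizes:

import requests

API_KEY = "your-api-key"  # placeholder
SHOP_ID = "your-shop-id"  # placeholder

resp = requests.get(
    f"http://openapi.etsy.com/v2/shops/{SHOP_ID}/listings/active",
    params={"api_key": API_KEY, "fields": "title,url",
            "limit": 100, "includes": "MainImage"},
)
resp.raise_for_status()

for listing in resp.json()["results"]:
    # MainImage is attached by the includes parameter; url_570xN is one
    # of several sizes the v2 API returns.
    image = (listing.get("MainImage") or {}).get("url_570xN")
    print(listing["title"], listing["url"], image)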